Claim 6 - Random processes cannot create order

So many times I've seen arguments claiming that the evolution of a living human being is a totally random process, rather like sending a tornado through a junkyard and ending up with a Mercedes. This analogy is utterly false. First I'll explain a little about randomness in science, and then I'll try a simple analogy with some of the numerical modelling work that I've been doing recently.

Many people appeal to thermodynamics, claiming that it completely disproves evolution by showing that you cannot get order from chaos. In fact, what the second law of thermodynamics says is this:
The total entropy of any closed system can never decrease over time.
Entropy is a quantity used in chemistry and physics to characterise physical systems, and it is closely linked to the concept of disorder. Its exact definition is related to the number of different configurations a physical system can adopt. A good example of this is water. When water is frozen the molecules are locked into a lattice with very little space in which to move, so they have few available configurations. This means that the entropy of an ice cube is low. As the ice melts, the solid water turns into liquid water, and the water molecules now have far more positions to occupy, so the entropy increases. Similarly for evaporation - entropy increases again. So one might say that, if we take an ice cube together with some source of heat and thermally isolate the pair from the rest of the universe, then heating that ice cube from freezing through melting and boiling into water vapour increases the entropy of the closed system, as predicted.
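For the statistically minded, the link between configurations and entropy is made precise by Boltzmann's formula,

    S = k ln W

where W is the number of microscopic configurations available to the system and k is Boltzmann's constant. More available configurations means higher entropy, which is exactly why melting and evaporation both increase it.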

So what about freezing?

Well, creationists argue that the second law says that we cannot ever decrease entropy, so it should be totally impossible to freeze water, right? Well, clearly it isn't. Freezing water reduces the entropy of the water, yet this does not contradict the second law of thermodynamics. Why? Because the law applies to a closed system - that is, a system which is not in thermal contact with the rest of the universe. In order to freeze water, one must somehow take away heat energy from the vapour so that it cools to liquid, and then one must continue taking away heat energy until the liquid freezes. So where did the entropy go? Simple - the heat energy removed from the water carries entropy with it, dumped elsewhere by whatever mechanism we used to do the cooling. It hasn't vanished - it has simply been moved. This is how fridges and freezers work - they pump heat from the inside to the outside. If you then examine the entire system - water, fridge and room together - you see that the total entropy has indeed increased, as predicted, because the heat energy from the cooling water has been redistributed elsewhere.
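To make the bookkeeping explicit, suppose the fridge removes heat Q from the water at temperature T_cold and dumps heat Q + W into the room at temperature T_hot, where W is the electrical work the fridge consumes. Then, treating the temperatures as roughly constant,

    total entropy change = -Q/T_cold + (Q + W)/T_hot

and the second law demands only that this total be non-negative. Since T_hot is greater than T_cold, that requires the fridge to do work (W > 0) to push heat 'uphill' - but nothing forbids the water's own entropy, the -Q/T_cold part, from falling.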

So how does this apply to evolution? Well, we've just shown that it is entirely possible to get order out of disorder - which goes exactly against what the creationists argue, and agrees completely with evolution. For those who have not yet accepted this point, consider the many other examples of order out of chaos in the natural world. A good example is a star, like our Sun. The Sun formed from an enormous, chaotic cloud of gas, which collapsed, shrinking by many orders of magnitude, until it was the size of a star and nuclear fusion began. That, to me, is taking chaos (the initial gas cloud) and creating order out of it (the star). How did it happen? By gravity, of course. It's easy to create order out of chaos so long as you have a driving force for that change. In evolution we have natural selection as that driving force.

Still not convinced? Consider also snowflakes, crystals, precious gemstones or even the growth of plants and animals. All of these seemingly create order out of chaos, just like evolution, but in fact they are all very easy to explain.

Evolutionary Computation - An Example

I'm going to describe the technique of evolutionary computation, which resembles biological evolution in many ways. It is a technique used an awful lot in numerical calculations throughout science, maths, engineering and probably much further afield than that too. I am also part of a large research group at the University of Birmingham, which is one of the world leaders in this technique, so I am clearly very familiar with the process. It is partly because of my work on evolutionary computation that I am so convinced by the power of natural evolution. The problem is this: we have some challenge that we wish to solve. This might be a mathematical function, where we want to find the maximum point, or it might be a schedule, where we want to find the optimum use of time. In fact, it can be pretty much anything, provided you can define three simple properties:

  1. A way of representing candidate solutions (positions on a map, orderings of cities, parameter values).
  2. A way of scoring any candidate solution, so that better solutions get higher scores (the 'fitness').
  3. A way of generating new candidate solutions by randomly varying existing ones.
In most of these problems, we're dealing with an absolutely vast solution space. By that, I mean that it would be impossible to search through all of the potential solutions in turn, looking for the optimum. As an example, I'll take a simple, easy-to-visualise problem: finding the highest point on some terrain map. If we consider a terrain that is, say, 100km on each side, then a naive algorithm would have a lot of searching to do to find the highest one-metre-square patch - there are ten billion such squares, so the chance of hitting the right one with a single random guess is one in ten billion. But we can find it much more efficiently than that!

Now imagine that you're a hiker lost in some mountains in dense fog, and you want to find the highest point. If you just wander around at random, what is the chance of finding it? Pretty low. So let's introduce an evolutionary technique. Let's say you have a friend in the US military with access to a height-mapping satellite. Let's propose an evolutionary algorithm to solve this simple problem using that piece of technology and a bit of chance. In each step, we're just using random numbers to do the decision-making, but you'll quickly see how order evolves!

So let's just say we have a satellite which can measure terrain height in a very small area, say 1 metre square. We choose a few speculative locations on our terrain, say 100, and measure their heights above sea level. This is our first 'generation' of solutions. Then we decide which areas are most promising. Those locations which turned out to be in low-lying swamps at 5 metres above sea level might be discarded altogether, but those at 800 metres high are probably along the right lines, especially if there are a lot of similarly high points nearby.

We then perform a second step by taking measurements at new positions. The new positions are chosen randomly, but biased towards those locations which looked most promising before: the higher a location turned out to be, the more likely we are to place new sample points near it. For example, the point we found at 800m might get five or six new points scattered randomly around it, the point at 200m might get only one new 'child' neighbour, and the point at 20m will get none. Once we have placed 100 new points we measure their heights, and repeat the process. We have now replaced our original 100 random guesses with a second population which we expect, on average, to do much better than the first.
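To make this concrete, here is a minimal sketch in C of the kind of search loop I've just described. It isn't production code: the two-hill terrain function is an invented stand-in for the satellite, and the parameters (POP, GENS, the shrinking scatter radius) are just illustrative choices.

    /* Toy evolutionary search for the highest point on a 2D 'terrain'.
       The terrain function is an invented stand-in for the satellite. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>
    #include <time.h>

    #define POP  100       /* sample points per generation */
    #define GENS 50        /* number of generations */
    #define SIZE 100000.0  /* the terrain is SIZE x SIZE metres */

    typedef struct { double x, y, h; } Point;

    /* Invented terrain: two smooth hills, one higher than the other. */
    static double height(double x, double y)
    {
        double dx = x - 70000.0, dy = y - 30000.0;
        double big = 1000.0 * exp(-(dx * dx + dy * dy) / 5e8);
        dx = x - 20000.0; dy = y - 80000.0;
        return big + 600.0 * exp(-(dx * dx + dy * dy) / 5e8);
    }

    static double urand(double lo, double hi)
    {
        return lo + (hi - lo) * rand() / (double)RAND_MAX;
    }

    /* Sort points best-first (highest first). */
    static int by_height(const void *a, const void *b)
    {
        double d = ((const Point *)b)->h - ((const Point *)a)->h;
        return (d > 0) - (d < 0);
    }

    int main(void)
    {
        Point pop[POP], next[POP];
        int i, g;

        srand((unsigned)time(NULL));

        /* Generation 0: purely random guesses across the whole map. */
        for (i = 0; i < POP; i++) {
            pop[i].x = urand(0.0, SIZE);
            pop[i].y = urand(0.0, SIZE);
            pop[i].h = height(pop[i].x, pop[i].y);
        }

        for (g = 1; g <= GENS; g++) {
            /* Selection: only the best quarter get to 'reproduce'. */
            qsort(pop, POP, sizeof(Point), by_height);

            /* Variation: scatter each new point randomly around a
               parent from the best quarter, tightening the scatter
               as the generations pass. */
            for (i = 0; i < POP; i++) {
                const Point *parent = &pop[rand() % (POP / 4)];
                double spread = SIZE / (10.0 * g);
                next[i].x = parent->x + urand(-spread, spread);
                next[i].y = parent->y + urand(-spread, spread);
                next[i].h = height(next[i].x, next[i].y);
            }
            for (i = 0; i < POP; i++) pop[i] = next[i];
        }

        qsort(pop, POP, sizeof(Point), by_height);
        printf("Best found: %.0fm at (%.0f, %.0f)\n",
               pop[0].h, pop[0].x, pop[0].y);
        return 0;
    }

Run it a few times: because the toy terrain has two hills, the population will occasionally pile up around the lower, 600m peak instead of the 1000m one - exactly the 'sub-optimal peak' effect I come back to below.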

This is much like evolution. Random copying variations in the genotype of a species make some members of each population slightly more or less fit to survive than their neighbours. If a mutation improves an organism's fitness, then that organism is more likely to survive and pass the mutation on. (We could add a probabilistic bias here, allowing that some of these fitter individuals would still die, but the overall result would clearly be the same.) However, if a mutation makes an organism less fit, then the chances are its line will die out - and the more unfit it is, the more likely that becomes.

So here I'm making the following analogy:

  1. Taking measurements randomly across the terrain = A population of individuals with different genotypes (and hence, phenotypes).
  2. The highest points have a larger influence on the next generation = The fittest individuals are most likely to survive long enough to pass on their genes.
  3. Low points are very likely to be ignored for the next generation = Individuals with low fitness are very unlikely to survive to pass on their genes.
  4. We're looking for the highest point on the landscape = We're looking to evolve the fittest individual.

I have used this method a lot, and it is extremely efficient. The example I gave had a two-dimensional search space (x, y) and a single fitness value (the height at location x, y). In my own work, for example in model fitting, I've dealt with search spaces of hundreds of dimensions. Evolutionary computation has been applied to the travelling salesman problem, where there can be thousands of nodes and an unimaginably large number of possible solutions. It works extremely well, finding a very strong solution extraordinarily quickly. It doesn't always find the best solution, but then again, neither does nature!

At every step in the iteration, the code generates totally random steps. These steps are then accepted or rejected with a probability of the kind I described above. This is analogous to evolution, where crossover, DNA mutations and splicing cause random changes in the heritable characteristics of individuals in a population. Changes which improve fitness are far more likely to survive, and parents with high fitness are more likely to live long enough to influence the subsequent generation.

I have seen my code work extremely well on overwhelmingly complicated datasets. For example, a travelling salesman problem (TSP) with 20 cities has around 60 million billion distinct tours (19!/2 of them, to be exact). Checking those one by one, at a million tours per second, would take around two thousand years. Evolutionary computation can find an excellent tour in a few seconds with ease, using perhaps a population of a hundred individuals evolved over a few hundred generations.
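Those figures are easy to verify. Here's a quick sanity check in C - the million-tours-per-second evaluation rate is just an assumed figure:

    /* Sanity-check the travelling salesman numbers quoted above. */
    #include <stdio.h>

    int main(void)
    {
        double tours = 1.0;
        int n = 20, i;

        for (i = 2; i < n; i++)  /* (n-1)! orderings from a fixed start */
            tours *= i;
        tours /= 2.0;            /* each tour counted in both directions */

        printf("Distinct 20-city tours: %.3g\n", tours);   /* ~6.1e16 */
        printf("Years to brute-force at 1M tours/sec: %.0f\n",
               tours / 1e6 / (365.25 * 24 * 3600));        /* ~1900  */
        return 0;
    }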

It's difficult to compare the two ideas exactly, but it's clear to me that the underlying process is the same. Natural evolution, however, has had a few billion years to do the job. That's a few billion iterations if the timescale between generations is about a year. (Of course, early on the generation time would have been enormously shorter than this - perhaps hours or even minutes for a bacterium.) In addition, instead of making one random step in each generation, nature makes literally trillions - one for each member of a species. If you consider the early single-celled organisms, then 'trillions' is certainly an underestimate!

So perhaps you now see why these arguments that randomness cannot possibly create order are completely false. Of course it can, if the randomness merely drives a process with a strong selection bias, so that improvements are preferentially kept. Anyone who says that the chance of assembling a complete human genome by a purely random process is one in however many trillion is probably correct - but that's irrelevant, because it's not what evolution did. Evolution didn't take three billion base pairs and throw them together at random - it iterated the solution over billions (probably trillions) of generations, with trillions of organisms at every step, each performing its own iterative step. Does it still seem all that implausible to you? Fascinating, marvellous, even breathtaking - but certainly not implausible. In fact, I'd go so far as to say that it is not only plausible, but extremely likely.

One interesting consequence of this analogy is that it is entirely possible to end up stuck on a smaller peak that is not the highest. Let's say that, just by luck, our first generation landed one reading near the top of a sub-optimal peak. We would then concentrate the large majority of our subsequent readings around that point, and possibly miss the global optimum altogether.

Applying this theory directly to animals themselves is not strictly valid - the natural world is a co-evolving system, which means that the fitness of any one organism depends on the organisms around it; it isn't a fixed value. For example, a lion is justifiably regarded as the king of the beasts in the middle of the Serengeti, but probably wouldn't fare too well if transported to Antarctica, or the Pacific Ocean. Similarly, a killer whale wouldn't survive long in Tanzania, and a polar bear has remarkable difficulty perching in trees. They are all adapted to life in a certain ecological niche, and you cannot assign a 'fitness' value to a living organism without specifying that precise environment too - not just the climate and geography, but also the other animals in the neighbourhood. A cheetah, for example, would find it remarkably difficult to survive if there were no small herbivores to chase (and eat).

However, this idea of sub-optimal peaks can be applied to individual biological features. Human eyes, for example, are not as efficient as they could be - the optical receptors are plugged in backwards! To move from the current form to a much better design, though, would require stepping through a large number of much worse intermediate stages in which individual receptors were gradually rotated. It is said that evolution is blind - it has no final goal in sight - it just surges onwards. Those intermediate stages would be less fit than their competitors, and would therefore die out despite being on the way to a complex improvement - a distant peak! To continue the analogy, we would never 'cross the valley'. Besides, in this case the evolutionary pressure to develop a significantly better vision system isn't particularly strong: assuming our ancestors evolved in dense jungle, they wouldn't have needed very good distance vision because they could never see very far anyway! For a hawk, on the other hand, being able to spot a tiny vole at a range of hundreds of metres is clearly a very considerable benefit.

Think about rolling 100 six-sided dice. Let's say you keep the sixes and roll the rest again. If you repeat this, only re-rolling the dice that are not already showing sixes, then you're sure to get a six on every die eventually - typically within a few dozen rounds. The chance of rolling one hundred sixes in one go, by contrast, is so small as to be effectively zero. Take a random process, apply a non-random selection effect - setting aside the dice that have already rolled sixes - and you can easily create a highly non-random result.
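This one is trivial to simulate. Here's a short C program that does exactly that - roll, keep the sixes, re-roll the rest - and reports how many rounds it took:

    /* Roll 100 dice; keep any sixes and re-roll the rest, counting
       the number of rounds until every die shows a six. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define DICE 100

    int main(void)
    {
        int remaining = DICE, rounds = 0;

        srand((unsigned)time(NULL));
        while (remaining > 0) {
            int sixes = 0, i;
            for (i = 0; i < remaining; i++)
                if (rand() % 6 == 5)  /* this die rolled a six */
                    sixes++;
            remaining -= sixes;
            rounds++;
        }
        printf("All %d dice showing sixes after %d rounds.\n",
               DICE, rounds);
        return 0;
    }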

If you still don't see how random processes can create highly non-random results, then please write to me for more explanation. The claim that evolution is completely random is a misunderstanding at the very heart of most negative views of evolution, and I believe that clearing it up is the keystone to helping a large number of people accept this beautiful theory.

Also, for those of you who program in C, below is a very short piece of code demonstrating how the processes behind natural evolution can solve extremely unlikely problems.
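What follows is a minimal sketch of the idea, evolving a fixed target phrase by nothing more than random mutation plus selection; the target (Dawkins' famous 'METHINKS IT IS LIKE A WEASEL'), the population size and the mutation rate are all arbitrary choices:

    /* Evolve a target string by random mutation plus selection. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define POP      100    /* mutated copies per generation */
    #define MUT_RATE 0.04   /* chance of mutating each character */

    static const char TARGET[]  = "METHINKS IT IS LIKE A WEASEL";
    static const char CHARSET[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

    /* Fitness: how many characters match the target. */
    static int score(const char *s, int len)
    {
        int i, hits = 0;
        for (i = 0; i < len; i++)
            if (s[i] == TARGET[i])
                hits++;
        return hits;
    }

    int main(void)
    {
        int len = (int)strlen(TARGET);
        char parent[64], child[64], best[64];
        int i, gen = 0;

        srand((unsigned)time(NULL));

        /* Start from a completely random string. */
        for (i = 0; i < len; i++)
            parent[i] = CHARSET[rand() % 27];
        parent[len] = '\0';

        while (score(parent, len) < len) {
            int j, best_score = score(parent, len);
            strcpy(best, parent);

            /* Breed POP mutated copies; keep the fittest seen so far. */
            for (j = 0; j < POP; j++) {
                strcpy(child, parent);
                for (i = 0; i < len; i++)
                    if (rand() / (double)RAND_MAX < MUT_RATE)
                        child[i] = CHARSET[rand() % 27];
                if (score(child, len) > best_score) {
                    best_score = score(child, len);
                    strcpy(best, child);
                }
            }
            strcpy(parent, best);
            gen++;
            printf("Gen %4d (%2d/%d): %s\n", gen, best_score, len, parent);
        }
        return 0;
    }

Assembling that 28-character string by blind chance alone would take around 27^28 (roughly 10^40) attempts; with selection doing the filtering, this program typically converges within a few hundred generations.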



Is this a fair representation? If not then drop me an email. Address below.



This page maintained by Colin Frayn. Email .
Last Update : 2nd December, 2005