Conservation of Information — The Idea


I'm reviewing Jason Rosenhouse's new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.

Rosenhouse devotes a section of his book (sec. 6.10) to conservation of information, and precedes it with a section on artificial life (sec. 6.9). These sections betray such ignorance and confusion that it is best to wipe the slate clean. I will therefore highlight some of the key problems with Rosenhouse's treatment, but will mainly focus on providing a brief history and overview of conservation of information, together with references to the literature, so that readers can determine for themselves who is blowing smoke and who has the beef.

Blowing smoke

Rosenhouse's misunderstanding of conservation of information becomes evident in his foray into artificial life. Anyone who understands conservation of information recognizes that artificial life is a fool's errand. Still, Rosenhouse's support for artificial life is unreserved. The term artificial life has been around since the late 1980s, when Christopher Langton, working at the Santa Fe Institute, promoted it and edited the proceedings of a conference on the subject. I was working in chaos theory at the time and followed the Santa Fe Institute's research in this area, so, as a side benefit (if you can call it that), I witnessed first-hand the initial surge of enthusiasm for artificial life.

Artificial life consists of computer simulations that produce lifelike virtual entities, often via some form of digital evolution that mimics selection, variation, and heredity. The field has had its ups and downs over the years, first generating a lot of excitement and then losing it when people started asking, "What does this have to do with real biology?" After a while those nagging concerns would be forgotten, a new generation of researchers would get excited about artificial life, and the cycle would repeat. Rosenhouse, it seems, represents the latest wave of enthusiasm. As he writes: "[Artificial life experiments] are not so much simulations of evolution as examples of it. By observing such an experiment, you are watching real evolution take place, albeit in an environment in which the researchers control all the variables." (p. 209)

Smuggled information

Conservation of information, as developed by my colleagues and me, arose as a reaction to such artificial life simulations. We found, in analyzing them (see here for several analyses we have done of specific artificial life programs, such as Avida, which Rosenhouse praises), that the information researchers claimed to get out of these programs was never created from scratch and never amounted to any genuine increase in information, but always reflected information put in by the researchers, often without their knowledge. The information was therefore smuggled in rather than created by the algorithm. But if information smuggling is a feature rather than a bug of these simulations (and it is), that undermines their use in supporting biological evolution. Any biological evolution worth its salt is supposed to create new biological information, not merely redistribute it from existing sources.

For my colleagues and me at the Evolutionary Informatics Lab (EvoInfo.org), it therefore became a game to find where the information supposedly obtained for free in these algorithms had actually been slipped in surreptitiously (like finding the pea in a shell game). The case of Dave Thomas, a physicist who wrote a program to generate Steiner trees (a type of graph that optimally connects points in certain ways), is instructive. Challenging our claim that programmers were always putting as much information into these algorithms as they were getting out of them, he defied us to identify the precise place in his code where any such frontloading was happening.

We found it. Here is the code snippet, which includes the incriminating comment "over-ride!!!":

x = (double)rand() / (double)RAND_MAX;
num = (int)((double)m_varbnodes*x);
num = m_varbnodes; // over-ride!!!

As we explained in an article on Thomas's algorithm:

The claim that no design was involved in the production of this algorithm is very difficult to maintain given this section of code. The code selects a random number for the number of exchanges; however, immediately afterward it discards the randomly calculated value and replaces it with the maximum possible, in this case 4. The code is marked with the comment "over-ride!!!", indicating that this was Thomas's intention. It is the equivalent of saying "go east," and then a moment later changing your mind and saying "go west." The most likely explanation is that Thomas was unhappy with the initial performance of his algorithm and therefore modified it.

A famous algorithm

We have seen this pattern, in which artificial life programs smuggle in information, repeated over and over. I saw it for the first time while reading The Blind Watchmaker. There, Richard Dawkins touted his famous WEASEL algorithm (which Rosenhouse wholeheartedly embraces as capturing the essence of natural selection; see pp. 192-194). Taking as his target phrase a line from Shakespeare's Hamlet, METHINKS IT IS LIKE A WEASEL, Dawkins found that if he tried to "evolve" it by randomly varying letters while requiring all of them to spell the target phrase at once (compare tossing a run of coins and needing all heads simultaneously), the improbability would be enormous and success would take practically forever. But if instead the letters could vary a few at a time, with intermediate phrases that shared more letters with the target subjected in turn to further selection and variation, then the probability of generating the target phrase in a manageable number of steps would be quite high. Thus Dawkins was able to generate the target phrase in, on average, fewer than 50 steps, far fewer than the roughly 10^40 trials needed on average if the algorithm had to scale Mount Improbable in one giant leap.
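Dawkins's published description is easy to sketch in code. The following is an illustrative reconstruction, not Dawkins's original program: the population size, mutation rate, and function names are all assumptions made for the sketch.

```python
import random
import string

# Illustrative sketch of Dawkins's WEASEL procedure (not his original code).
# Population size and mutation rate are assumed parameters.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "  # 27 characters

def fitness(phrase):
    # Count positions matching the target. Note that the target phrase
    # itself is hard-coded into this function.
    return sum(a == b for a, b in zip(phrase, TARGET))

def evolve(copies=100, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        # Mutate each letter of each copy independently with small probability,
        # then keep the fittest phrase (the parent competes as well).
        offspring = [parent] + [
            "".join(rng.choice(ALPHABET) if rng.random() < mutation_rate else c
                    for c in parent)
            for _ in range(copies)
        ]
        parent = max(offspring, key=fitness)
        generations += 1
    return generations

print(evolve())
```

Run as written, the cumulative selection finds the phrase in a modest number of generations, whereas blind sampling would need on the order of 27^28, roughly 10^40, trials.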

Dawkins, Rosenhouse, and other fans of artificial life consider Dawkins's WEASEL a wonderful illustration of Darwinian evolution. But if it illustrates Darwinian evolution, it illustrates a Darwinian evolution chock-full of prior information put in by intelligence, and therefore actually illustrates intelligent design. That shouldn't be controversial. Insofar as it is controversial, Dawkins's WEASEL illustrates the deluding power of Darwinism. To see through this example, ask yourself: where does the fitness function that evolves the intermediate phrases toward the target phrase come from? The fitness function in question assigns highest fitness to METHINKS IT IS LIKE A WEASEL and graded fitness to intermediate phrases depending on how many letters they share with the target. Obviously, the fitness function was constructed on the basis of the target phrase. All the information about the target phrase has thus been baked into, or as computer scientists would say, hard-coded into, the fitness function. And what is hard-coding but intelligent design?

But there is more

The fitness function in Dawkins's example slopes gradually and is unimodal, so the intermediate phrases move steadily toward the target. But for any sequence of letters of the same length as the target phrase, there is an exactly parallel fitness function that will make the intermediate phrases evolve toward that new sequence. Moreover, there are many other fitness functions besides these, including multimodal ones on which evolution can get stuck at a local maximum, and ones that are less smooth but still reach the target phrase with reasonably high probability. The point to appreciate here is that rigging a fitness function to land on a target sequence is even harder than just going straight to the target. This insight is the key to conservation of information.
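The parallel-fitness-function point can be made concrete with a small sketch (again illustrative; the names and the hill-climbing scheme are assumptions): the search machinery is generic, and only the target hard-coded into the fitness function differs, so choosing the fitness function is exactly where the targeting information enters.

```python
import random
import string

# Illustrative sketch: identical search code reaches whichever target
# its fitness function happens to encode.

ALPHABET = string.ascii_uppercase + " "

def make_fitness(target):
    # One parallel fitness function per possible target phrase.
    return lambda phrase: sum(a == b for a, b in zip(phrase, target))

def hill_climb(target, seed=0):
    rng = random.Random(seed)
    fitness = make_fitness(target)
    current = "".join(rng.choice(ALPHABET) for _ in range(len(target)))
    while current != target:
        # Mutate one random position; keep the change if fitness does not drop.
        pos = rng.randrange(len(target))
        candidate = current[:pos] + rng.choice(ALPHABET) + current[pos + 1:]
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current

# Identical code, different hard-coded targets:
for target in ("METHINKS IT IS LIKE A WEASEL", "ANY OTHER PHRASE WORKS TOO"):
    assert hill_climb(target) == target
```

The evolution here does no targeting work of its own; swap in a different fitness function and the same loop dutifully converges on a different phrase.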

I began using the term conservation of information in the late 1990s. Yet the term itself is not unique to me and my colleagues. Nobel Prize-winning biologist Peter Medawar used it in the 1980s. By the mid-1990s, computer scientists were using the term and similar language. We may not all have meant exactly the same thing, but we were all in the same ballpark. From 1997 to 2007, I preferred the term displacement to conservation of information. Displacement names the problem of explaining one item of information by reference to another, which does nothing to elucidate the origin of the information in question. For example, if I explain a Dürer woodcut by reference to an inked woodblock, I have not explained the information in the woodcut but merely displaced it to the woodblock.

Displacing information

Darwinists are in the business of displacing information. Yet when they do, they typically act in total innocence, claiming to have fully accounted for all the information in question. Worse, they condescend to anyone who suggests that biological evolution faces an information problem. Information obeys strict accounting principles, so it cannot magically materialize the way Darwinists would wish. What my colleagues and I at the Evolutionary Informatics Lab have found is that, apart from intelligent causation, attempts to explain information do nothing to alleviate, and may even intensify, the problem of explaining the origin of the information in question. It is like filling one hole by digging another, where the newly dug hole is at least as deep and wide as the first (often more so). The one exception is that noted by Douglas Robertson, writing in 1999 in Complexity, the journal of the Santa Fe Institute: the creation of new information is an act of free will by intelligence. That is consistent with intelligent design. But it is off-limits to Darwinists.
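The accounting can be illustrated numerically with the WEASEL example. The back-of-the-envelope sketch below uses the log-improbability bookkeeping behind our lab's active-information measure; the framing as a WEASEL calculation is mine, not Rosenhouse's.

```python
import math

# Back-of-the-envelope information accounting for the WEASEL example.
# Endogenous information = improbability of a blind uniform draw hitting
# the target phrase, expressed in bits.

ALPHABET_SIZE = 27   # 26 capital letters plus the space
PHRASE_LENGTH = 28   # length of "METHINKS IT IS LIKE A WEASEL"

p_blind = (1.0 / ALPHABET_SIZE) ** PHRASE_LENGTH
endogenous_bits = -math.log2(p_blind)

print(round(endogenous_bits, 1))  # ≈ 133.1 bits
```

A search that reliably finds the phrase in a few thousand queries must be supplied with nearly all of those 133 bits from somewhere; in WEASEL's case, the supplier is the target-encoding fitness function.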

Next: "Conservation of Information — The Theorems."

Editor's note: This review is published, with the author's permission, from BillDembski.com.

