MONKEY

by David C. Wise
Written 1990
Originally posted in the Science & Religion Library on CompuServe


A. S. Eddington. The Nature of the Physical World: The Gifford Lectures, 1927:
... If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.

Douglas Adams. The Hitchhiker's Guide to the Galaxy, Fit the Second:
Arthur: "Ford, there's an infinite number of monkeys outside who want to talk to us about this script for Hamlet they've worked out."

RFC 2795: The Infinite Monkey Protocol Suite (IMPS), MonkeySeeDoo, Inc., 1 April 2000

Abstract

This memo describes a protocol suite which supports an infinite number of monkeys that sit at an infinite number of typewriters in order to determine when they have either produced the entire works of William Shakespeare or a good television show. The suite includes communications and control protocols for monkeys and the organizations that interact with them.

Robert Wilensky, 1996:
"We've heard that a million monkeys at a million keyboards could produce the complete works of Shakespeare; now, thanks to the Internet, we know that is not true."

Lennon and McCartney:
Everybody's got something to hide, except for me and my monkey!


Development History and Issues

  • 1990 September 10:
    Original design of MONKEY in Turbo Pascal. This made MONKEY a 16-bit application, something that would cause issues much later. It worked just fine at the time.

  • 1995 December 18:
    After a few years of machines getting faster and faster, a bug in Turbo Pascal's start-up code caused an overflow which would crash the program. Turbo Pascal's timing functions relied on a timing loop calibrated during start-up: the code counted how many times it looped between two timer interrupts. When PCs finally got too fast, that loop counter would overflow, crashing the program.
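
    In essence, the calibration did something like this (a schematic sketch in C for readability; it is not Borland's actual code, and wait_for_tick()/tick_elapsed() are hypothetical stand-ins for the timer-interrupt handling):

        /* Schematic of the Delay calibration: count how many times a tight
           loop runs between two timer ticks.  On a fast enough CPU the
           counter overflows, crashing the program at start-up. */
        unsigned short count = 0;       /* too narrow for a fast CPU */
        wait_for_tick();                /* hypothetical: sync to a timer tick */
        while (!tick_elapsed())         /* hypothetical: true at the next tick */
            count++;                    /* overflows when the CPU is too fast */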

    A couple of years later, I found a patch on CompuServe that fixed this bug, and I was able to create and release a fixed version. It ran just fine on a 1-GHz Pentium III machine.

  • 2011 August 02:
    After more than a decade, I tried running MONKEY on my Windows XP machine. It ran, but it didn't look right. I suspect that hardware speed has caught up with it again: it appears to be running, but so fast that no elapsed time can register.

    At any rate, since MONKEY is a 16-bit application, it will run on a 16-bit or 32-bit system but not on a 64-bit system such as 64-bit Windows 7 -- I just tried to run it there, and it doesn't. Looks like it's time for me to rewrite it in C, C++, C#, or Java -- probably Java, so it can run here as an applet. OK, that's on my to-do list now.

  • 2015 October 08:
    I finally converted the Pascal source for MONKEY to C. The new executable, monkey.exe, is 32-bit, so it will run on a 64-bit system. I have added it to the ZIP file and renamed the old executable to MONKEY_OLD.EXE. I have also added the C source file, even though I have not had the time to clean it up and make it more presentable. My apologies for that.

    I compiled it with MinGW gcc set to link the libraries statically, so it should not need the distributable runtime. If you have problems running it, then please let me know so that I can try to remedy the problem.

    Also please note that if you want to play with the source and recompile it, you will need a copy of conio.h. If you are also using MinGW gcc, it should already be in your tool-chain, most likely in the include directory.

    I am also trying to convert MPROBS to C, but so far without success. The Turbo Pascal version emulates an extended floating-point format which, as I recall, is the 80-bit format that the numeric co-processor used. Unfortunately, none of my C/C++ compilers support it, and the resulting round-off errors completely mess up the output. C's long double data type would probably work, and my compilers do recognize it, but they define it to be the same as a double, so no joy.
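
    If you want to check what your own compiler does with long double, a quick probe like this (a minimal, self-contained sketch using the standard float.h macros) will tell you whether you are getting the 80-bit extended format (64 mantissa bits) or just a double in disguise (53 mantissa bits):

        /* Report how wide this compiler's floating-point types really are. */
        #include <stdio.h>
        #include <float.h>

        int main(void)
        {
            printf("double:      %u bytes, %d mantissa bits\n",
                   (unsigned)sizeof(double), DBL_MANT_DIG);
            printf("long double: %u bytes, %d mantissa bits\n",
                   (unsigned)sizeof(long double), LDBL_MANT_DIG);
            return 0;
        }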

    I apologize for any inconvenience this long delay may have caused. I still work for a living as a software engineer and so can only work on my many personal projects in my "copious spare time." (If you're also an engineer, you will understand that inside joke.)


    The Genesis of MONKEY

    One project that I took on immediately after reading Richard Dawkins' The Blind Watchmaker was suggested by its Chapter 3, "Accumulating Small Changes". In that chapter, Dawkins described two different models for the selection of randomly generated order:

    1. Single-step Selection

       In single-step selection, the entire final product is generated at one time and must match the target in order to succeed. If it fails, then the next trial must start all over again from scratch. The probability for single-step selection to succeed is vanishingly small: my own example's probability is on the order of 10^-36, and it would take about 10^28 years of independent trials on a supercomputer (e.g., one capable of one million trials per second -- about 1,000 times faster than a PC in 2017) to have even odds of succeeding. This method of selection has nothing whatsoever to do with evolution, yet it is the one that creationists always use to "model evolution". It actually models their own position: creation ex nihilo.

    2. Cumulative Selection

       In contrast, cumulative selection is an iterative method. You start with a randomly generated attempt, but when it fails, instead of throwing it away you make multiple copies of it with slight random changes ("mutations"), so that those copies are very similar to, yet slightly different from, the original -- analogous to what happens in nature. Then you select the copy that comes closest to the target and use it to generate the next "generation" of copies. And so on. Obviously, this method better models living populations and natural selection. The probability of success is astoundingly better: instead of taking millions of billions of years, it succeeds in less than half a minute -- consistently, repeatedly, without fail. (A code sketch of this loop follows the list.)
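
    For the programmers in the audience, here is a minimal sketch of the cumulative-selection loop in C. It follows the description above, but the names, the population size, and the other parameters are illustrative assumptions for this page, not excerpts from MONKEY.C:

        /* Cumulative selection, sketched: breed POP mutated copies of the
           current string each generation and let the best copy become the
           parent of the next generation. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        #define POP 100                              /* copies per generation */

        static const char CHARSET[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

        static char rand_char(void)
        {
            return CHARSET[rand() % (sizeof CHARSET - 1)];
        }

        /* Fitness: the number of positions that match the target. */
        static int score(const char *s, const char *target)
        {
            int n = 0;
            for (; *target; s++, target++)
                n += (*s == *target);
            return n;
        }

        int main(void)
        {
            const char *target = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
            size_t len = strlen(target);
            char parent[64], child[64], best[64];
            int generation = 0;

            srand((unsigned)time(NULL));
            for (size_t i = 0; i < len; i++)         /* random first attempt */
                parent[i] = rand_char();
            parent[len] = '\0';

            while (score(parent, target) < (int)len) {
                int best_score = -1;
                for (int i = 0; i < POP; i++) {
                    strcpy(child, parent);
                    child[rand() % len] = rand_char();  /* ANY position may mutate */
                    if (score(child, target) > best_score) {
                        best_score = score(child, target);
                        strcpy(best, child);
                    }
                }
                strcpy(parent, best);   /* best copy breeds the next generation */
                printf("generation %4d: %s\n", ++generation, parent);
            }
            return 0;
        }

    Note that the mutation line may change any position, correct or not, and that the best copy replaces the parent even when every copy scored worse -- which is exactly the backsliding you can watch happen when you run MONKEY.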

    In The Blind Watchmaker, Dawkins' target string was a single line from Shakespeare's Hamlet: "Methinks it is like a weasel." For my own experiment I chose the alphabet in alphabetical order, though in its final form MONKEY allows the user to enter whatever target string he wants. I renamed the program MONKEY because of the Eddington quotation above, even though Eddington wasn't talking about evolution at all, but rather about thermodynamics (see the Infinite Monkey Theorem). Other than that, I used Dawkins' description of the program as my design specification, such that MONKEY has been described as one of the most faithful renderings of his original program. This has become important, since most of the criticisms of WEASEL and MONKEY are based almost solely on a misunderstanding of how they work: namely, critics claim that when a correct letter appears it is locked in and no longer subject to change, whereas in both WEASEL and MONKEY all letters are always subject to being changed, including correct ones.

    So, back in 1989 when I first read about WEASEL in The Blind Watchmaker, I simply could not believe it. I wrote MONKEY in order to see it for myself and to test what Dawkins was saying. Even when I saw it for myself, I still could not believe it. When I showed it to a fundamentalist co-worker (third generation, so you could still carry on a conversation with him), I first showed him the abysmal probability of single-step selection and the billions of years it needed, described cumulative selection briefly, then ran MONKEY. His jaw literally dropped when MONKEY succeeded in 30 seconds.

    OK, I still couldn't believe it. It was still too good to be true. So I undertook a study of the problem that calculated the actual probabilities. That study became MPROBS ("Monkey PROBabilitieS"). In one typical case, the probability of success within 80 generations is over 99.99%. In other cases with smaller population sizes, the probabilities of success within 100 generations are still relatively high and increase noticeably with each successive generation. The probability of any single string making that one step closer to the solution is low, and it becomes lower as the string gets closer to the solution; but since we are working with a population of attempts, what matters is the probability that at least one of them advances that one step. To put it another way, there is some probability that none of them will advance, and even that they will all back-slide away from the solution (indeed, you can observe that happening when you run MONKEY). But that probability of the entire population failing becomes smaller as the population size increases, and smaller still with the requirement that every single generation must fail. It gets to the point where the probability of failure becomes so small that its complement, the probability of success, becomes overwhelming, even inevitable.
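
    To put numbers behind "inevitable", here is the core of the calculation in its simplest form (a sketch only; the full Markov-chain treatment is in MPROBS.DOC). If a single copy advances one step with probability p, and each generation contains N independent copies, then

        P(every copy fails in one generation) = (1 - p)^N
        P(at least one copy advances)         = 1 - (1 - p)^N

    For example, even a meager p = 0.01 with a population of N = 100 gives (0.99)^100, or about 0.37, so some copy advances in roughly two generations out of three; and the chance that all of G successive generations fail, (1 - p)^(N*G), dies away geometrically with G.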

    When I completed my study, I packed everything into an archive file (I think we were still using PKARC at that time) and uploaded it to a couple libraries on CompuServe, including the Science & Religion section of the Religion Forum, which is where we discussed creation/evolution. For the next seven years that I remained on CompuServe, MONKEY was downloaded at least once a month every single month.

    Now we can demonstrate that creationist probability arguments are wrong for at least two reasons:

    1. Their description of evolution being change through pure random chance is simply not true and very misleading; natural selection can be very deterministic and not at all random.

    2. They are using the wrong probability model to describe natural selection. Natural selection uses cumulative selection, not single-step. And there is an enormous difference between the probabilities of cumulative versus single-step selection.

    By working on and analyzing MONKEY, I found a quantitative reason behind the statement that natural selection can make the improbable inevitable.


    Links:

    The original MONKEY distribution with a minor correction [*] and additions [**].
    Zipped using PKZIP. Files can be extracted with WinZIP.

    The files in this archive are:

    • MONKEY.DOC -- Text file explaining how to use MONKEY
    • MONKEY_OLD.EXE -- Original executable copy of MONKEY written in Pascal. DOS application. Runs on IBM PC or compatible.
      [*] Recently patched to run on newer machines [***].
    • MONKEY.PAS -- Turbo Pascal source file for MONKEY used to build MONKEY_OLD.EXE. Read it to see how MONKEY works.
    • MPROBS.DOC -- Text file containing a discussion of the probabilities involved in MONKEY and a description of how MPROBS works.
    • MPROBS.PAS -- Turbo Pascal source file. Calculates the probabilities for cumulative selection within a population. Used to generate the data for MPROBS.DOC. Quick-and-dirty "user interface" requires that you edit the file with new parameters and recompile each time you use it.
    • README.MNK -- MONKEY README file. This file.
      [**] Additions to the distribution [***].
    • MONKEY.EXE -- New executable copy of MONKEY written in C and built with MinGW gcc. 32-bit Windows console application; runs on both 32-bit and 64-bit Windows.
    • MONKEY.C -- C source file for MONKEY used to build MONKEY.EXE. Read it to see how MONKEY works.

    [***] -- Read the Development History and Issues notice at the top of this page.

    Monkey Probabilities (MPROBS)
    HTML'ized version of the MPROBS.DOC file distributed in MONKEY.ZIP.

    "Almost Like a Whale" (broken link)
    This was a web page in which Ian Musgrave had collected several implementations of Dawkins' WEASEL, including my own modest effort, MONKEY.

    Addendum:
    2001 September 05
    Footnotes added 2017 January 20

    When I first uploaded my MONKEY to the Biology and Science & Religion libraries on CompuServe in 1989, my intention was not only to share it with everybody, but also to have others review it for errors, especially in the math. I wanted to be sure that I hadn't made any mistakes in deriving all those probabilities -- especially the Markov chains which I had applied almost in a cookbook fashion.

    Over the years, until I had to leave CompuServe in 1997 because their new "improvements" crippled their service beyond usability, I saw that MONKEY was being downloaded fairly constantly (at least once a month). However, I received almost none of the feedback I had sought, and almost all of what little I did receive was irrelevant: it exhibited confusion about what the program was trying to do and attacked it for things that it wasn't even trying to do. The only relevant feedback was that in MPROBS' output the numbering of the generations is off by one.

    In order to dispel some of that confusion, I offer the following:


  • Please be aware that MONKEY is not a full-scale simulation of evolution, nor of natural selection, although either would be an interesting project. It does not simulate the selection process itself. It does not simulate the sources of variability. It does not simulate the viability of the offspring.

    Rather, MONKEY concentrates on the method of selection that natural selection employs, namely cumulative selection. It explores the power of cumulative selection by contrasting it with the common alternative, single-step selection. To do so, MONKEY holds all other factors constant: selection is performed on the same basis for both methods (i.e., based on similarity to the target string), letters are generated randomly in the same way, and so on. Since the only difference is in the selection methods themselves, the difference in performance must be due to the methods themselves. The reason for this difference is analyzed in MPROBS.DOC.
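
    To see how little changes between the two runs, here is single-step selection expressed against the cumulative-selection sketch earlier on this page, reusing its rand_char() and score() helpers (illustrative only, not MONKEY.C): the same random letters and the same scoring, but every failed attempt is thrown away entirely.

        /* Single-step selection: each trial is a complete, fresh random
           string that must match the target outright. */
        long tries = 0;
        char attempt[64];
        do {
            for (size_t i = 0; i < len; i++)
                attempt[i] = rand_char();
            attempt[len] = '\0';
            tries++;            /* expect on the order of 27^26 trials... */
        } while (score(attempt, target) < (int)len);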


  • The primary function that MONKEY serves is to perform a comparison between these two methods of selection.

    Comparing these two methods is all the more appropriate since single-step selection is the method that creationists usually attribute to evolution whereas cumulative selection is much more descriptive not only of natural selection but also of how life itself propagates.


  • A secondary function that MONKEY serves is to examine the question of high rates of mutation.

    After the first version of MONKEY, I added a new feature to test a common creationist statement that evolutionists would want to increase the rates of mutation in order to speed up evolution, so they should want all kinds of mutagens dumped into the environment. Of course, this makes the false assumption that mutation is equivalent to evolution (wrong, wrong, wrong!) and by running it through MONKEY I found that the results of increased mutation rates would indeed be undesirable.

    The normal, benchmark mutation rate in MONKEY is one (1) letter per copy. With the new feature, you can allow multiple letter positions to be randomly selected for mutation. However, instead of speeding up the process, this actually slows it down greatly. I haven't had the time to work out the probabilities of allowing multiple letters to change in each string, but I have found empirically that the more we allow each copied string to change, the longer it takes MONKEY to produce the target string. This agrees with Dawkins' emphasis on a series of small genetic changes being more effective than large sudden changes. It is also not contrary to the "abrupt" change of punctuated equilibria, because punctuationalist theory calls for episodic change that is abrupt only on the geological time scale; on the generational time scale, punctuationalists are still gradualists.
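
    In terms of the cumulative-selection sketch earlier on this page, this feature amounts to replacing the single mutation line with a small loop (illustrative only; m is the user-selected mutation count, and the randomly chosen positions may occasionally coincide):

        /* Mutate m randomly chosen positions per copy instead of one. */
        for (int k = 0; k < m; k++)
            child[rand() % len] = rand_char();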

    Thus, MONKEY has made us aware of an interesting apparent paradox: stasis is actually more important to evolution than mutation is.

    The probability calculations in MPROBS help us to appreciate why this is so: Even though the probability of a single individual progressing towards the target is slim, the probability that nobody in a sizable population will progress is even slimmer. Thus the population as a whole will tend to advance towards the target. But if you increase the number of changes, then you also increase the probability that nobody will advance and even that everybody will lose ground. Thus you decrease the probability that the population would advance and you slow the entire process down.
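
    A simplified version of that argument (a sketch that ignores the small chance of two mutations landing on the same position): with c of the L letters currently correct and an alphabet of A characters, a single random-position mutation damages the string with probability (c/L) * ((A-1)/A). A copy that suffers m independent mutations therefore escapes all damage only with probability roughly

        P(no damage) = (1 - (c/L) * ((A-1)/A))^m

    which shrinks rapidly both as m grows and as c approaches L. Late in the run, when nearly every letter is already correct, almost every heavily mutated copy backslides.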

    You need stasis to keep your current gains, then just a little bit of change to offer the opportunity to advance. Too much change too quickly can cause you to lose your place. Before MONKEY, I hadn't realized that.


  • One concern expressed about MONKEY was whether the selected string would be "viable." As I have already said, MONKEY doesn't try to simulate viability, so that question is handled abstractly. However, this concern seems moot to me. Consider that Darwinian (and neo-Darwinian) evolution deals with the descendants of pre-existing species changing over time to become better adapted to their environment (i.e., more "fit"). The parent species had to be well enough adapted to its environment, i.e. "viable," to survive long enough to produce offspring; if the parent species with which we start is not viable, then the experiment stops right there. So any offspring that is better adapted (determined in MONKEY by its similarity to the target) should also be more viable than its parent.

    Otherwise, the only way to address the question of viability would be to try to simulate it, which would be an entirely different project.


  • Objections have been raised that using a target string is a form of teleology; e.g., from an email I received:
    "Any complexity is provided by the experimenter. Dawkins' model with the weasel words is designed so that it will *always* hit the target. This is the equivalent of teleology, guided development, which means that either the organism itself or some outside controller 'knows' what it is planning to be."
    First and foremost, it must be remembered that the exact same selection criteria were used for single-step selection as were used for cumulative selection. MONKEY and WEASEL were controlled experiments for comparing the results generated by two different selection methods: single-step selection and cumulative selection. Standard procedure in controlled experiments is to perform the experiment twice while changing one and only one factor, the factor you are testing, and keeping everything else the same. Then, it is assumed, any differences observed are due to that factor.

    If this foreknowledge of the target, this "teleology", were the cause of MONKEY's success, then why doesn't single-step selection "*always* hit the target" as well, instead of missing it by light-years? Why is there so much difference between the results of the two selection methods? The reason is inherent in the selection methods themselves, as analyzed in MPROBS.DOC. The immense difference in results is due to the methods themselves.

    This objection also misses the point that selection happens. Whether performed/guided by an external intelligence or not, selection happens. Artificial selection is applied by humans trying to direct the changes in their domesticated crops and livestock. Darwin took that practice as an analogy for what nature was doing and called it "natural selection". The central theme of Dawkins' The Blind Watchmaker was that natural selection is blind to the outcome. Dawkins has been criticized for having forgotten this in presenting his WEASEL, but in saying so his critics only demonstrate that they had not read what Dawkins had actually written:

    "Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. If, after the aeons, what looks like progress towards some distant goal seems, with hindsight, to have been achieved, this is always an incidental consequence of many generations of short-term selection. The 'watchmaker' that is cumulative natural selection is blind to the future and has no long-term goal."
    (The Blind Watchmaker, p. 50)
    This "teleology problem" is more apparent than real. Certainly, a better simulation of evolution or natural selection would make use of immediate selective pressure arising from the environment itself. Indeed, better simulations have been written and run and have demonstrated that natural selection works.

    But then neither MONKEY nor WEASEL was ever billed as a simulation of evolution nor of natural selection. All they were ever intended to do was to compare two different kinds of selection, which they have done rather well. The significance of that comparison is that cumulative selection, the method that was modelled after the way that life works and that natural selection is understood to work, functions extremely well, whereas single-step selection, the method that has nothing at all to do with how either evolution or life itself works, fails miserably.

    The added significance of this comparison to the creation/evolution discussion is that creation science routinely misrepresents evolution as using single-step selection (which ironically is the selection method of their own model, creation ex nihilo) and creationists neglect to tell their audiences that there is another method which far better describes what life actually does. And in making that omission, the creationists are deceiving themselves and their audience.


  • There has somehow arisen in the creation science literature a common misconception about how both WEASEL and MONKEY work. Here is how "intelligent design" advocate William Dembski describes it:
    He starts with a target sequence taken from Shakespeare's Hamlet, namely, METHINKS IT IS LIKE A WEASEL. If we tried to attain this sequence by pure chance (for example, by randomly shaking out scrabble pieces), the probability of getting it on the first try would be around 1 in 10^40, and correspondingly it would take on average about 10^40 tries to stand a better than even chance of getting it.[12] Thus, if we depended on pure chance to attain this target sequence, we would in all likelihood be unsuccessful. As a problem for pure chance, attaining Dawkins' target sequence is an exercise in generating specified complexity, and it becomes clear that pure chance simply is not up to the task.

    But consider next Dawkins' reframing of the problem. In place of pure chance, he considers the following evolutionary algorithm: (1) Start with a randomly selected sequence of 28 capital Roman letters and spaces (that's the length of METHINKS IT IS LIKE A WEASEL); (2) randomly alter all the letters and spaces in the current sequence that do not agree with the target sequence; (3) whenever an alteration happens to match a corresponding letter in the target sequence, leave it and randomly alter only those remaining letters that still differ from the target sequence. In very short order this algorithm converges to Dawkins's target sequence. In The Blind Watchmaker, Dawkins recounts a computer simulation of this algorithm that converges in 43 steps.[13] In place of 10^40 tries on average for pure chance to generate the target sequence, it now takes on average only 40 tries to generate it via an evolutionary algorithm.
    ("Can Evolutionary Algorithms Generate Specified Complexity", "Nature of Nature" conference, Baylor University, 21 April 2000) 1

    Dembski is rigging the results! He misrepresents the process as retaining any letters that happen to come up right and only changing the ones that are wrong. He is telling us that once a letter is correct, then it is never messed with again. Basically, he is telling us that Dawkins and I have rigged our results, that we are cheating!

    That is wrong!

    Dawkins did not describe any such condition for his WEASEL, and I most certainly did not include one in my MONKEY, which has been described as one of the most faithful renderings of Dawkins' description -- no surprise, since I used his description as my design specification. My source code is open for inspection. Every single letter in the string is equally subject to random change, regardless of whether it agrees with the target sequence or not. MPROBS.DOC's analysis includes the probability of a correct letter being replaced with an incorrect one, thus causing the sequence to backslide. When you run MONKEY with a small population (e.g., 20), you can watch the sequence backslide at times, with correct letters being replaced by incorrect ones -- though with computers getting faster and faster, it's becoming more difficult to see that happen. We are quite clearly not doing what Dembski is accusing us of!
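
    The difference is easy to state in code. Here are the two mutation rules side by side, sketched in C (illustrative only -- the second rule appears nowhere in MONKEY.C or in Dawkins' description):

        /* What WEASEL and MONKEY actually do: any position may mutate,
           including positions that already match the target. */
        child[rand() % len] = rand_char();

        /* What Dembski describes: correct letters are "locked in" and
           never touched again.  (This sketch assumes the string is not
           yet complete, or the search below would never terminate.) */
        size_t pos;
        do {
            pos = rand() % len;
        } while (child[pos] == target[pos]);   /* skip the "locked" letters */
        child[pos] = rand_char();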

    To be fair to Dembski, he probably did not originate that misrepresentation. The scenario of unchanging correct letters was presented two years earlier by a creationist, Royal Truman:

    Furthermore, one does not need a computer to understand and simulate [Dawkins'] argument. Simply envision 28 rings each with every letter of the alphabet and a blank space stamped on each ring, next to each other on a metal cylinder held horizontally. Spin all the rings one after the other or at the same time. Note the rings which show the characters or spaces facing you which match the target sentence. Spin the remaining unsuccessful rings until all the letters match the target.
    ("Dawkins' weasel revisited", Royal Truman, Creation Ex Nihilo Technical Journal 12(3):358–361, 1998, http://www.answersingenesis.org/docs/4057.asp)
    It appears, moreover, that Royal Truman and possibly also Dembski got their mistaken ideas about WEASEL from philosopher of science Elliott Sober's book, Philosophy of Biology (1993), which to my knowledge is the earliest description of WEASEL to use locking rings to keep correct letters from changing, contrary to Dawkins' spec and my implementation.

    However, I would fault Dembski for not having done his research -- you know, the really basic practice of going back to the source, namely The Blind Watchmaker, to verify what it actually says, something that too many creationists fail to do.


  • FOOTNOTE 1:
    Glenn R. Morton reported on the "Nature of Nature" conference, an "intelligent design" event held in April 2000 in Waco, TX. Unfortunately, he has taken his web site down and we can no longer access his report. In his coverage of Dembski's presentation, Dembski "examined genetic algorithms and made many mistakes concerning their properties and the way they worked", such that, faced with "[h]ands ... upraised all over the room" by people who worked with genetic algorithms and knew better than what Dembski had told them, "Dembski had the deer in headlights look."

  • Working on a project to more fully simulate evolution would be interesting, if I had the time. In such a project we would need to define an environment, phenotypes that would interact with that environment as they try to survive, genotypes that would direct the development of those phenotypes, and rules for the mutation of those genotypes.

    The problems in developing such a simulation are considerable. All of these elements would need to be as realistic and as free from interference as possible. The criteria for fitness should not be predetermined arbitrarily but would have to come directly from the environment and the organisms' interaction with that environment. The embryonic development from genotype to phenotype should follow regular rules which could be arbitrary to some extent, but the phenotypes produced should not be predetermined, but rather be the result of the expression of the genotypes -- a software example of this is Dawkins' Biomorphs [2]. The mutation of the genotypes should be the easiest part of the project, once the genotypes have been defined. Of course, one of the greater problems would be how to evaluate the simulation; if we allow the model to be too abstract then the resultant environment and "organisms" could be so alien to us that we could not make any sense out of it.

    The closest that I have seen programs come to simulating selection based on the interaction of an organism with its environment are TBUGS [3] and Dr. Thomas Ray's TIERRA [4].

    In the meantime, I would still like to hear ideas for programs to simulate evolution and, if I should ever have the time to attempt such a project, I would definitely need ideas to work with. Of course, as I tell people who try to model evolution with single-step selection (like Michael Denton), we have to keep in mind just what we are trying to model.


  • FOOTNOTE 2:
    In the second half of the third chapter of The Blind Watchmaker, Dawkins describes a kind of computer game he had written to illustrate aspects of embryonic development. From that link:
    The program displayed a two dimensional shape (a "biomorph") made up of straight black lines, the length, position, and angle of which were defined by a simple set of rules and instructions (analogous to a genome). Adding new lines (or removing them) based on these rules offered a discrete set of possible new shapes (mutations), which were displayed on screen so that the user could choose between them. The chosen mutation would then be the basis for another generation of biomorph mutants to be chosen from, and so on. Thus, the user, by selection, could steer the evolution of biomorphs. This process often produced images which were reminiscent of real organisms, for instance beetles, bats, or trees. Dawkins speculated that the unnatural selection role played by the user in this program could be replaced by a more natural agent if, for example, colourful biomorphs could be selected by butterflies or other insects, via a touch sensitive display set up in a garden.
    The book's appendix included an order form for that program. However, at the time it only existed for the Mac, which I have never owned, so I wrote my own version in Turbo Pascal to run in CGA graphics mode on MS-DOS. The program has since been ported to Windows and there exist open source versions.


  • FOOTNOTE 3:
    Actually, only I call it TBUGS, since I had written it in Turbo Pascal (hence the "T" in TBUGS). It was based on an article I read in Scientific American, which is described in Dewdney's BugWorld, a software project page:
    In 1989, A.K. Dewdney wrote an article in Scientific American entitled "Simulated evolution: wherein bugs learn to hunt bacteria" as a part of the "Computer Recreations" column (May, pp. 138--141). The ideas in that article were included in his book Turing Omnibus (1989).

    The idea described in these works is a very simple artificial life experiment. A toroidal landscape houses moving agents (which we will call "bugs") and immobile food elements ("bacteria") for the agents. The bugs are incapable of sensing their environment, but they do make a kind of "choice" regarding the direction they move. This choice is made by a simple distribution across six different discrete turning choices, defined by a set of genes. Bugs gain energy when they eat bacteria and burn energy when they move; however, a bug that runs out of energy will die (be removed from the simulation), and a bug that has sufficient energy and age will divide into two nearly identical copies. At the start, the bugs "jitter" around, turning randomly; however, they will often eventually evolve to glide around the world, scooping up bacteria in their path.

    That page includes a ZIP file containing the source code for a MASON applet -- again with the Macs! Googling on the article title, there's a page to buy a PDF of the article from Scientific American. There are also several programs based on the article, such as BugSim.

    Basically, you can set up the environment with different rules for how the food elements grow; e.g., food grows fast, food grows slow, food only grows in one area, etc. When the bugs reproduce by fission (one bug, having eaten enough, becomes two), the new bug's genes can be mutated. Since the genes control how the bugs move, the new bug could develop new movement behavior. Then how the food grows determines which movement behavior works best, and soon all the bugs have that behavior, since the ones that didn't have it starved and died off. If food grows uniformly, then you'll have "cruisers" that just move in a straight line to where there's more food (their "world" has wrap-around, so when a bug leaves one side of the screen it reappears on the opposite side). If food only grows in one area, then you get "twirlers" who move in tight circles in order to stay where the food is. I seem to recall (it has been nearly three decades, after all) that how fast the food grows also affects how fast the bugs move. And so on.
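
    The genetic part is simple enough to sketch in C: each bug's genes are just weights over the six turning choices, and each move samples from that distribution (an illustrative reconstruction, not Dewdney's code and not my TBUGS source):

        /* Pick one of the six turning directions with probability
           proportional to the bug's gene weights (total assumed > 0). */
        int choose_turn(const int genes[6])
        {
            int total = 0, r, i;
            for (i = 0; i < 6; i++)
                total += genes[i];
            r = rand() % total;
            for (i = 0; i < 6; i++) {
                if (r < genes[i])
                    return i;
                r -= genes[i];
            }
            return 5;    /* not reached when the weights sum correctly */
        }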


  • FOOTNOTE 4:
    Dr. Thomas Ray's TIERRA was rather interesting -- go to that Wikipedia article for more information and for the link to the TIERRA Home Page. Its organisms are virtual computers which feed on computer resources (i.e., memory and processing time). Each consists of a short program which enables it to use resources and to reproduce. Two interesting results of the experiments were:
    1. The co-evolution of parasites and hosts

       Parasites are entities that have lost the ability to reproduce on their own, so, somewhat like viruses, they infect healthy entities in order to use their hosts' resources to reproduce. In response to the parasites, some hosts evolved code to resist a parasite attack, and some hosts even evolved strategies to exploit the parasites, thus becoming a kind of hyper-parasite.

    2. Evolving programs deemed by humans to be impossible

       The humans developing TIERRA worked out the original code for the entities to reproduce. In the process, they determined the minimum size that a program could be and still enable reproduction; an entity with a smaller program simply could not reproduce and would eventually die off as an evolutionary dead end. But then some entities developed programming techniques that the humans had never dreamed of, had thought to be impossible. With these novel techniques, entities with programs maybe half the size of the "smallest possible working program" were able to reproduce and also, as I recall, make more efficient use of resources. When you read the documentation, look for "unrolling the loop."
    Remember that those properties evolved on their own and were not planned by the human experimenters in any fashion.


    Share and enjoy!

    Return to DWise1's Creation/Evolution Links Page
    Return to DWise1's Creation/Evolution Home Page

    Contact me.


    First uploaded on 1997 July 02.
    Updated on 2017 January 20.