Genetic Robots

August 20, 2006

Many years ago, I think in 1999, Jen found Soda Constructor, an on-line game that lets you build a robot as a mass-spring system. You can then run the robot in a small two-dimensional room. The springs can be marked as muscles, meaning that they contract periodically. You can set the phase of the contraction to make different muscles contract at different times. I wanted to evolve such robots automatically, so I wrote a program to do so using a genetic algorithm. (I’ve since learned that my algorithm was in fact evolutionary but not genetic because it didn’t use sexual reproduction.)
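
Each muscle is just a spring whose rest length oscillates in time, with a per-muscle phase offset. A minimal sketch of that model in Python (the parameter names and the sinusoidal form are illustrative; Soda Constructor’s actual model may differ):

    import math

    def muscle_rest_length(base, amplitude, period, phase, t):
        # Rest length oscillates over time; phase controls when this
        # muscle contracts relative to the others.
        return base + amplitude * math.sin(2 * math.pi * t / period + phase)

    def spring_force(k, rest_length, p1, p2):
        # Hooke's-law force on point p1 from a spring attached to p2.
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            return (0.0, 0.0)
        magnitude = k * (dist - rest_length)  # positive pulls p1 toward p2
        return (magnitude * dx / dist, magnitude * dy / dist)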

My goal was to evolve a robot that had multiple muscles, where the muscles worked together to pull the robot along the ground. As I’ve since learned, genetic algorithms are good at satisfying your stated goals by cheating rather than by creating the solution you had in mind. My original fitness function measured how far the robot got from one side of the room toward the other. The program evolved one tall stick that fell over:
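
[animation]

The evolutionary loop itself is simple: score each robot with the fitness function, keep the best, and mutate them to fill out the next generation. A rough sketch (random_robot, mutate, and fitness are placeholders for the real representation, and the selection scheme is illustrative):

    import random

    def evolve(random_robot, mutate, fitness, pop_size=50, generations=200):
        # Mutation-only evolution: no crossover, which is why this is
        # "evolutionary" rather than strictly "genetic."
        population = [random_robot() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]  # keep the fitter half
            children = [mutate(random.choice(parents))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)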

I really wanted the robot to go back and forth in the room, not just fall over, so I modified the fitness function to measure how quickly a robot could touch one wall and then the other. This resulted in a very wide robot that collapsed, first touching the right wall and then touching the left:
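
[animation]

In code, that fitness function would look something like this (sim.step() and robot.touches() are hypothetical stand-ins for the real simulation API, and the right-then-left order is just one possibility):

    def fitness_touch_both_walls(robot, sim, time_limit):
        # Reward completing the right-then-left round trip; sooner is better.
        touched_right = False
        for t in range(time_limit):
            sim.step()
            if not touched_right and robot.touches("right_wall"):
                touched_right = True
            elif touched_right and robot.touches("left_wall"):
                return time_limit - t  # steps to spare when finished
        return 0  # never completed the round trip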

I then decided to remove the walls altogether, since my goal was a walking robot and the walls were just adding ways that the system could cheat. I modified the fitness function to measure how far the robot could get in a certain amount of time. The program quickly evolved a pogo stick:
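
[animation]

With the walls gone, the fitness function reduces to horizontal displacement over a fixed run (same hypothetical helpers as above):

    def fitness_distance(robot, sim, time_limit):
        # Horizontal distance covered by the robot's center of mass.
        start_x = robot.center_of_mass_x()
        for _ in range(time_limit):
            sim.step()
        return abs(robot.center_of_mass_x() - start_x)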

I tried various things to avoid this simplistic solution, like varying the value of gravity for each generation. (The pogo stick depends on a particular gravity value in order to bounce just right.) I still couldn’t get away from the pogo stick. It was extremely resilient. I finally found a bug in my code that was causing the top end of the pogo stick (which had a much larger evolved mass) to fall more slowly under gravity. I fixed this and got this nice leaning triangle:
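
[animation]

The bug amounted to applying gravity as a constant force instead of scaling it by each point’s mass, so after dividing force by mass to get acceleration, heavier points fell more slowly. Roughly (illustrative code, not the original):

    GRAVITY = 9.8  # can be varied per generation to discourage gravity-tuned cheats

    def apply_gravity(point):
        # Buggy version: point.force_y -= GRAVITY
        # (acceleration = GRAVITY / mass, so heavier points fall more slowly).
        # Fixed version: force scales with mass, so every point
        # accelerates downward at the same rate.
        point.force_y -= point.mass * GRAVITY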

In fact, after fixing the bug, there was nothing I could do to get a simple pogo stick back. The pogo stick must have depended on this bug in a way that the leaning triangle didn’t. Still, I wasn’t very happy with the leaning triangle, since it wasn’t very practical. It wasn’t the kind of robot that could reverse direction or stop and then continue. But then of course I hadn’t put that into the fitness function, either. So I brought the walls back, and set it up so that if the machine touched a wall, the phase would get inverted, allowing a robot to go the other way. (This is how Soda Constructor works, too.) This brought me back to the pogo stick, which (with the mass bug fixed) was unstable and just fell over. The machine just evolved to start as far to the right as possible and end up flat on the left:
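
[animation]

Inverting the phase shifts every muscle’s contraction by half a period, which reverses the gait. A sketch (same hypothetical simulation API):

    import math

    def step_with_walls(robot, sim):
        # On wall contact, invert every muscle's phase so the gait reverses.
        sim.step()
        if robot.touches("left_wall") or robot.touches("right_wall"):
            for muscle in robot.muscles:
                muscle.phase += math.pi  # half-period shift inverts the contraction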

I then decided to ignore the first bit of the simulation. For the first few generations I considered the distance travelled over the entire run, but as the generations went on I ignored up to the first half of the run. Since the pogo stick, as an unstable machine, spent the second half of its simulation flat on the ground, this allowed other kinds of machines to develop. I finally got the walking machine I had wanted from the start:
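
[animation]

In other words, the scored stretch of the run starts later and later as the generations go on, up to the halfway point. A sketch (same hypothetical helpers):

    def fitness_ignore_start(robot, sim, time_limit, generation, max_generation):
        # Ignore a prefix of the run that grows from 0% to 50% of the
        # simulation as the generations go on.
        ignore = 0.5 * min(generation / max_generation, 1.0)
        start_step = int(time_limit * ignore)
        for t in range(time_limit):
            if t == start_step:
                start_x = robot.center_of_mass_x()
            sim.step()
        return abs(robot.center_of_mass_x() - start_x)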

A few lessons from the experiment:

  1. If you have a specific result in mind, chances are that the system won’t evolve that result. It will instead evolve an easier solution that meets your stated goals without meeting your unstated goals.
  2. It’s difficult to find bugs in your code. In my case I forgot to take mass into account when calculating the force due to gravity, and the program was still able to evolve interesting robots. It’s only because in one case I saw the robot spin as it fell that I realized there was something wrong. For all I know there are still bugs in it.
  3. You really want a tool that asks the system, “Why didn’t you do this other thing?” In the tall triangle case above, why does it start out vertical and then fall over? It would go further (in the time allotted) if it started out already leaning over. I’ve seen the same problem when writing chess-playing programs, where the system makes a move and you want to ask it why it didn’t make this other, seemingly better move.
  4. Keep all your data files and all the versions of your code. With all the randomness involved in a program like this, you’ll have a hard time duplicating anything cool (or historically interesting) that you find. The first two animations above were manually reconstructed because I couldn’t get the program to evolve them again, seven years later.
  5. Vary unimportant parameters, like gravity, from generation to generation so that machines don’t cheat by depending on those parameters. Ignore the first bit of a machine’s life so that it can’t score by doing something simple right at the start. The real problem with the simple cheating machines is that they make it very difficult for more interesting (but initially slow) machines to compete. Had I done this with my first results (the tall stick falling over) I might have found some interesting things right away. Instead I tried to micro-manage the goals, which led to just other kinds of cheating. Another possible solution (which I didn’t try) is to only pay attention to a section of the creature’s life: for example, measure the distance travelled over 10% of the robot’s life, and change which 10% with every generation (see the sketch after this list).
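
A sketch of that sliding-window idea (untried, with the same hypothetical helpers as above):

    def fitness_window(robot, sim, time_limit, generation):
        # Score the distance travelled during a 10% window of the run;
        # the window slides with each generation.
        window = time_limit // 10
        start = (generation * window) % (time_limit - window)
        for t in range(time_limit):
            if t == start:
                start_x = robot.center_of_mass_x()
            if t == start + window:
                end_x = robot.center_of_mass_x()
            sim.step()
        return abs(end_x - start_x)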

Now I want to go further and evolve robots that either sexually reproduce (a genetic algorithm) or have a program that evolves (genetic programming). I also want to pit the robots against one another, maybe making some the prey and some the predators. That way I don’t have to come up with the fitness metric; it will take care of itself. My guess is that my biggest problem will be mass extinction.
