Tuesday, August 18, 2009

Robots evolve to deceive -- how could they evolve to cooperate?

Note: I am pretty sure I use the term "Bayesian network" completely incorrectly throughout this entire post. Just pretend I used the more general term "neural network". Or, if that's wrong too, just pretend I said whatever term wouldn't make me sound stupid and pretentious.

I have this vision in my head from school days of a bunch of inputs and outputs connected by some number of intermediate nodes that do a weighted sum of the inputs -- with the weights being the coefficients that change over time, whether by "training" the network or via genetic algorithm. Whatever that thing is called, that's what I meant. (That'll teach me to try and talk authoritatively about stuff I learned eight years ago and never applied...)
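For what it's worth, here's roughly the contraption I'm picturing, as a minimal Python sketch. The layer sizes, the lack of any activation function, and the mutation scheme are purely my own illustration, not anything taken from the actual study:

```python
import random

class TinyNet:
    """Inputs feed a layer of intermediate nodes that each take a weighted sum,
    and those sums feed the outputs. The weights are the 'coefficients' that
    training or a genetic algorithm would adjust over time."""

    def __init__(self, n_in, n_hidden, n_out, weights=None):
        self.shape = (n_in, n_hidden, n_out)
        n_weights = n_in * n_hidden + n_hidden * n_out
        self.weights = weights if weights is not None else [
            random.uniform(-1, 1) for _ in range(n_weights)]

    def forward(self, inputs):
        n_in, n_hidden, n_out = self.shape
        w = self.weights
        hidden = [sum(inputs[i] * w[h * n_in + i] for i in range(n_in))
                  for h in range(n_hidden)]
        base = n_in * n_hidden
        return [sum(hidden[h] * w[base + o * n_hidden + h] for h in range(n_hidden))
                for o in range(n_out)]

    def mutated(self, rate=0.1, scale=0.2):
        """Crude 'genetic' operator: randomly jiggle some of the coefficients."""
        new = [w + random.gauss(0, scale) if random.random() < rate else w
               for w in self.weights]
        n_in, n_hidden, n_out = self.shape
        return TinyNet(n_in, n_hidden, n_out, weights=new)
```

A population of these, ranked by some fitness score and with the best ones mutated into the next generation, is more or less the picture I had in mind.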


The scienceblog-o-sphere is mildly abuzz with a fun new study about Bayesian-network controlled robots evolving the ability to deceive each other. Click the link for details, but to very briefly summarize: The robots can only detect a "food" source at very close range, but they also have the ability to flash a light which the other robots can detect at a distance. They start off flashing randomly and searching randomly. Within a few generations, some robots evolved an attraction to the flashing lights, since it might indicate the discovery of a food source. Within a few more generations, though, many of the robots started to control when they flashed the light so as to "remain silent" when in proximity to a food source, thereby preventing others from discovering it and crowding them out for resources.

Neat stuff. I can't help but think the same thing probably could have been done with a computer simulation, but doing it with robots is just so cool. (It probably also has the advantage of removing a number of possible experimenter biases in the computational model of how to represent movement, sight, overcrowding, etc., since you can just let the laws of the physical world handle it.)

This got me to thinking, how would the conditions have to be different in order for the robots to evolve cooperative behavior? Could the conditions of the experiment be modified so that, after enough generations, some robots intentionally signaled the presence of a food source to their peers?

It is very difficult to see this happening in the present setup, where all the robots can see the "signal" emitted by all the other robots. A "gene"[1] would have no selfish reason to alert its peers to the presence of food, because there is no mechanism within that experimental framework to signal robots who shared the gene at a higher frequency than those who did not share the gene.[2]

But what if we changed the experiment to involve several different "species" of robots? Some of them emit blue light, some green light, some red light, etc. Furthermore, a robot can only see the color of light it itself emits, i.e. it can only "see" the signals of members of its own species. The species all compete for food in the same common area, and the selection criterion applies across species, i.e. survival is awarded to the 200 fittest robots regardless of species. "Mating", on the other hand, is constrained to within species.
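Sketched in code, the selection scheme I'm imagining looks something like this. The fitness function, the crossover operator, and the population numbers are placeholders of my own invention, not the experimenters':

```python
import random
from collections import defaultdict

SPECIES = ["red", "green", "blue"]   # each species emits, and sees, only its own color
SURVIVORS = 200                       # survival is awarded across species

def crossover(genome_a, genome_b):
    # Naive per-coefficient recombination of two genomes (lists of numbers).
    return [random.choice(pair) for pair in zip(genome_a, genome_b)]

def next_generation(population, fitness):
    """population: list of (species, genome) pairs.
    fitness: scores one robot's foraging success over a round in the arena."""
    # Cross-species selection: the 200 fittest survive, regardless of color.
    survivors = sorted(population, key=fitness, reverse=True)[:SURVIVORS]

    # Within-species mating: survivors only breed with their own color.
    by_species = defaultdict(list)
    for species, genome in survivors:
        by_species[species].append(genome)

    offspring = []
    while len(offspring) < len(population):
        species = random.choice([s for s in SPECIES if by_species[s]])
        mom = random.choice(by_species[species])
        dad = random.choice(by_species[species])
        offspring.append((species, crossover(mom, dad)))
    return offspring
```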

Now let's assume that "somehow or other" -- and I'll come back to this, because it's quite a large bit of hand-waving -- one of the species develops a significant sub-population with the "cooperative" gene. Call it the Green species, for the sake of convenience. Robots of the Green species would, on average, be alerted to food sources much more quickly than robots of other species, and so ought to tend to outperform them in each round. While within-species selection would work against the "cooperative" gene, in the short term it might still out-reproduce the genes of other species simply by virtue of random mating with the more selfish Greens. In other words, a sub-population of selfish Blues might be individually fitter than the cooperative Greens, but the selfish Greens are crowding the selfish Blues out and forcing them to disappear. Meanwhile, the few cooperative Greens that do manage to survive could continue to sprinkle the "cooperative" gene amongst the burgeoning Green population.

I think this might be a bit fanciful in that form. It doesn't sound particularly sustainable to me, and more damning is that I can think of no evolutionary process by which the Greens would develop a significant sub-population of cooperatives to begin with. (This is the frantic hand-waving I referred to earlier.) If a Green mutated to show cooperative tendencies, the selective pressures would be strongly against that mutant. The species-wide benefit would have to be immense in order to facilitate the by-chance propagation of a gene that was so clearly bad for the individual, but a large species-wide benefit could not develop until the "cooperative" gene constituted a sizeable sub-population. It seems to me this is an insurmountable can't-get-there-from-here problem.

Unless the inheritance were somehow tweaked to behave in a Mendelian fashion, with the "cooperative" gene being a recessive trait. Now I think maybe we are getting somewhere. A "cooperative carrier" can still behave selfishly, but it also benefits from the fact that roughly 1/4 of its brethren are pure cooperatives. There is still a bit of a bootstrapping problem here, because cooperative-carrier Greens are, within the species itself, no better off than purely selfish Greens. But if one species were to develop even a relatively small population of cooperatives and cooperative carriers, that species could suddenly start to significantly outperform other species. The selective benefit to the cooperative carrier is clear: it surrounds itself with cooperatives that it can exploit in order to out-compete other species.
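Reduced to a toy, the inheritance rule I have in mind looks like this (the allele names are arbitrary; the point is just that the cooperative behavior is only expressed in the homozygote):

```python
import random

COOP, SELF = "c", "S"   # 'c' = recessive cooperative allele, 'S' = dominant selfish allele

def behaves_cooperatively(genotype):
    # Recessive trait: cooperation is only expressed by the 'cc' homozygote.
    return genotype == (COOP, COOP)

def offspring_genotype(parent_a, parent_b):
    # Mendelian inheritance: one allele drawn at random from each parent.
    return (random.choice(parent_a), random.choice(parent_b))

# Two "cooperative carriers" (Sc) produce roughly 1/4 pure cooperatives.
carrier = (SELF, COOP)
kids = [offspring_genotype(carrier, carrier) for _ in range(10000)]
print(sum(behaves_cooperatively(k) for k in kids) / len(kids))   # ~0.25
```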

Hmmm, this makes me want to try to duplicate the experimenters' results in software and then start playing with the model... Of course, the fact that I've never directly worked with Bayesian networks or genetic algorithms could put a damper on that...

[1] Depending on the experimenters' exact implementation of the neural network and the inheritance scheme, it is probably not quite accurate to use the word "gene" here. At this time, by "gene" I am simply referring to some group of coefficients in the Bayesian network which produce a particular behavior. I am also going to pretend that this "gene" is atomically heritable, for purposes of this thought experiment, even though that may not reflect the experiment at hand.

[2] We can maybe envision a group of robots which devised a special "code", i.e. a series of pulses of light, that signified the presence of food only to others who were genetically "in the know", but this is fraught with all sorts of difficulties. For one, even a brief code would likely attract robots who did not comprehend the code. For another, robots who did not have the "cooperative" gene could still develop the capability to understand the code and exploit the cooperative robots. Perhaps most damningly, the development of this code is too complex to evolve all at once, and looking only within the context of food-signalling, there is no evolutionary advantage to a partial or weakly-comprehended code. In order to surmount this, the robots would have had to evolve a coded communication ability for some other purpose, which was then co-opted into food signalling -- and that is going way beyond the scope of even this quite fanciful thought experiment.

5 comments:

  1. If the next generation was weak and numerous, so that most died of starvation before becoming fully functional adults able to forage successfully, wouldn't that increase the generational benefit of a "gene" for the "altruistic" behavior of feeding offspring? If that could happen, I'd think that it should spread through the gene pool fairly quickly. Once parent-child (one or both parents) "altruism" became the norm, it might be easier to get a mutation for cooperation between the parents to feed their offspring, or even for the object of the altruism to accidentally extend from the child to others "close" to it. Mutual "altruism" would be the same as "cooperation." IANAbiologist, but that seems a more reasonable path to cooperation than just trying to get independent adults to cooperate from scratch. Or maybe that could help fill in your "somehow or other."

  2. So now this opens a whole other can of worms... (or can of robots?) In the original experimental setup, as I understand it, there is a discrete transition from generation to generation, i.e. you don't have units from generation N coexisting with units from generation N+1. I kept thinking the selective benefit of a partially altruistic population might be more powerful if the generations bled into one another, because now there is an added benefit to "cooperative carriers" in that they are likely to be directly surrounded by more "cooperatives" later in life. That could be intensified still more if reproduction were geographically proximal, i.e. when a new unit is produced, its initial location is in the same area as the parent(s).

    I hadn't even thought about what would happen if the robots progressed over time... Actually, I'm not sure anybody has done this with genetic simulations, since it's not obvious how to combine it with traditional Bayesian networks. I suppose one possibility would be to give a unit's "phenotype", in terms of the coefficients used in the Bayesian network, a random drift of a pre-programmed magnitude: all of the Bayesian coefficients start out randomly distributed within +/-10% of what the "genome" specifies, and then converge on the genome as the unit "ages" (see the rough sketch at the end of this comment).

    Of course that's not the same as normal animal development (or plant development, for that matter) but I wonder if it would be a useful model.

    Man, I steered away from neural networks in grad school because I thought (and still think, to be honest) that they aren't a whole lot more than a computational novelty. Maybe useful in very specific circumstances, but generally speaking it seems to me much more logical to just develop an algorithm yourself to do what you want, rather than "train" a Bayesian network to do it. (Genetic algorithms clearly have computational value, but they don't need to be based on neural networks, and in fact often aren't.) But now I'm thinking maybe I should have gotten into it more. Ah well...
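    Here, roughly, is what I mean by that drift rule, as a toy sketch -- the maturity age and the 10% figure are pulled out of thin air:

    ```python
    import random

    class DevelopingUnit:
        """Toy 'development': the phenotype's coefficients start up to +/-10% away
        from what the genome specifies and converge on the genome as the unit ages."""

        def __init__(self, genome, max_drift=0.10, maturity_age=50):
            self.genome = genome
            self.maturity_age = maturity_age
            # Fixed at 'birth': how far off each coefficient starts.
            self.offsets = [random.uniform(-max_drift, max_drift) for _ in genome]

        def phenotype(self, age):
            remaining = max(0.0, 1.0 - age / self.maturity_age)
            return [g * (1 + off * remaining)
                    for g, off in zip(self.genome, self.offsets)]
    ```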

  3. If one of the inputs to the Bayesian network was "time organism has been alive" then I suppose you could theoretically evolve metamorphic units, rather than having to impose some sort of hacky external rule, like the gradually shrinking phenotypic drift I proposed earlier. Still, though, this is not quite the same thing as a child growing into an adult, I don't think... while some of the traits that differ between immature and mature organisms are surely an intentional genetic result, I imagine a number of them are also due to the practical physical concerns of building an adult organism out of biological building blocks, with limited resources.

    I don't see any simple way to model that in a Bayesian network. If there were some way to classify certain coefficient values as "energy intensive", then we could imagine setting up the model so that organisms cannot manifest the energy-intensive coefficients in their genes until a certain amount of time has passed or a certain amount of "food" has been consumed. But how would you determine which coefficient values were "energy intensive"? Hmmm...

  4. I don't know a thing about Bayesian networks, and didn't read the original article, so my mental picture was just little tiny R2D2-like "mice" running around with multiple generations existing at once and all of them looking for "cheese" or batteries or electric outlets or whatever. Hah! Here comes the cat!

  5. heh, well, after my last comment I went and revisited some of this stuff on Wikipedia and I think I've been seriously abusing the terminology here anyway. "Bayesian network" apparently means something way more specific than I thought it did... So perhaps we are both equally clueless :)
