Note: I am pretty sure I use the term "Bayesian network" completely incorrectly throughout this entire post. Just pretend I used the more general term "neural network". Or, if that's wrong too, just pretend I said whatever thing that wouldn't make me sound stupid and pretentious.
I have this vision in my head from school days of a bunch of inputs and outputs connected by some number of intermediate nodes that do a weighted sum of the inputs -- with the weights being the coefficients that change over time, whether by "training" the network or via genetic algorithm. Whatever that thing is called, that's what I meant. (That'll teach me to try and talk authoritatively about stuff I learned eight years ago and never applied...)
The scienceblog-o-sphere is mildly abuzz with a fun new study about Bayesian-network controlled robots evolving the ability to deceive each other. Click the link for details, but to very briefly summarize: The robots can only detect a "food" source at very close range, but they also have the ability to flash a light which the other robots can detect at a distance. They start off flashing randomly and searching randomly. Within a few generations, some robots evolved an attraction to the flashing lights, since it might indicate the discovery of a food source. Within a few more generations, though, many of the robots started to control when they flashed the light so as to "remain silent" when in proximity to a food source, thereby preventing others from discovering it and crowding them out for resources.
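For flavor, here's roughly how I imagine the evolutionary loop looks in software. This is only a toy sketch of my own (the population size, genome length, and fitness stand-in are all invented; the real study evolved neural controllers on physical robots):

```python
import random

POP_SIZE = 50
GENOME_LEN = 4  # toy controller: a handful of weights


def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]


def mutate(genome, rate=0.1, scale=0.2):
    # jitter each weight with probability `rate`
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]


def evolve(population, fitness_fn, survivors=25):
    # rank by fitness, keep the top half, refill with mutated copies
    ranked = sorted(population, key=fitness_fn, reverse=True)
    parents = ranked[:survivors]
    children = [mutate(random.choice(parents))
                for _ in range(POP_SIZE - survivors)]
    return parents + children


# toy fitness: reward a large first weight (pretend it means
# "attraction to flashing lights"), standing in for food actually found
population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(30):
    population = evolve(population, fitness_fn=lambda g: g[0])
```

In the real experiment, of course, fitness comes from the physics of the arena rather than from a hand-picked weight.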
Neat stuff. I can't help but think the same thing probably could have been done with a computer simulation, but doing it with robots is just so cool. (It probably also has the advantage of removing a number of possible experimenter biases in the computational model of how to represent movement, sight, overcrowding, etc., since you can just let the laws of the physical world handle it.)
This got me to thinking, how would the conditions have to be different in order for the robots to evolve cooperative behavior? Could the conditions of the experiment be modified so that, after enough generations, some robots intentionally signaled the presence of a food source to their peers?
It is very difficult to see this happening in the present setup, where all the robots can see the "signal" emitted by all the other robots. A "gene"1 would have no selfish reason to alert its peers to the presence of food, because there is no mechanism within that experimental framework to signal robots who shared the gene at a higher frequency than those who did not share the gene.2
But what if we changed the experiment to involve several different "species" of robots? Some of them emit blue light, some green light, some red light, etc. Furthermore, a robot can only see the color of light it itself emits, i.e. it can only "see" the signals of members of the same species. The species all compete for food in the same common area, and the selection criterion applies cross-species, i.e. survival is awarded to the 200 fittest robots regardless of species. "Mating", on the other hand, is constrained to within species.
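The two rules that matter here -- species-restricted signal visibility, cross-species survival, within-species mating -- are easy to state in code. A minimal sketch, with all the names and data shapes made up by me:

```python
import random
from collections import namedtuple

Robot = namedtuple("Robot", ["species", "genome", "fitness"])
SPECIES = ["red", "green", "blue"]


def visible_signals(robot, signals):
    # a robot only perceives flashes of its own species' color
    return [s for s in signals if s["color"] == robot.species]


def next_generation(robots, survivors=200):
    # survival is cross-species: the top `survivors` by fitness, any color
    alive = sorted(robots, key=lambda r: r.fitness, reverse=True)[:survivors]
    # mating is within-species: pair each survivor with a same-color partner
    children = []
    for r in alive:
        pool = [p for p in alive if p.species == r.species]
        partner = random.choice(pool)  # may self-pair in a thin species
        child_genome = [random.choice(pair)
                        for pair in zip(r.genome, partner.genome)]
        children.append(Robot(r.species, child_genome, fitness=0.0))
    return children
```

Note the asymmetry: a whole species can dwindle or vanish under the shared survival cutoff, which is exactly the lever the thought experiment needs.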
Now let's assume that "somehow or other" -- and I'll come back to this, because it's quite a large bit of hand-waving -- one of the species develops a significant sub-population with the "cooperative" gene. Call it the Green species, for sake of convenience. Robots of the Green species would, on average, be alerted to food sources much quicker than robots of other species, and so ought to tend to outperform them in each round. While the within-species selection would be against the "cooperative" gene, it might in the short term tend to out-reproduce the various genes of other species simply by virtue of random mating with more selfish Greens. In other words, a sub-population of selfish Blues might be more individually fit than cooperative Greens, but the selfish Greens are crowding out the selfish Blues and forcing them to disappear. Meanwhile, the few cooperative Greens that do manage to survive could continue to sprinkle the "cooperative" gene amongst the burgeoning Green population.
I think this might be a bit fanciful in that form. It doesn't sound particularly sustainable to me, and more damning is that I can think of no evolutionary process by which the Greens would develop a significant sub-population of cooperatives to begin with. (This is the frantic hand-waving I referred to earlier.) If a Green mutated to show cooperative tendencies, the selective pressures would be strongly against that mutant. The species-wide benefit would have to be immense in order to facilitate the by-chance propagation of a gene that was so clearly bad for the individual, but a large species-wide benefit would not be able to develop until the "cooperative" gene constituted a sizeable sub-population. It seems to me this is an insurmountable can't-get-there-from-here problem.
Unless the inheritance were somehow tweaked to behave in a Mendelian fashion, with the "cooperative" gene being a recessive trait. Now I think maybe we are getting somewhere. A "cooperative carrier" can still behave selfishly, but it also benefits from the fact that 1/4 of the offspring of carrier pairings are pure cooperatives. There is still a bit of a jump-start problem here, because cooperative carrier Greens are, within the species itself, no better off than pure selfish Greens. But if one species were to develop even a relatively small population of cooperatives and cooperative carriers, that species could suddenly start to significantly outperform other species. The selective benefit to the cooperative carrier is clear: it surrounds itself with cooperatives that it can exploit in order to out-compete other species.
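The Mendelian tweak is tiny in code. A sketch, assuming a single two-allele locus controls the behavior (allele names are mine):

```python
import random


def phenotype(alleles):
    # "cooperate" is recessive: expressed only when both alleles are 'c'
    return "cooperative" if alleles == ("c", "c") else "selfish"


def offspring(parent_a, parent_b):
    # each parent contributes one allele at random, Mendel-style
    return (random.choice(parent_a), random.choice(parent_b))


# two carriers ('C', 'c') should produce roughly 1/4 pure cooperatives
carrier = ("C", "c")
kids = [offspring(carrier, carrier) for _ in range(10000)]
coop_fraction = sum(phenotype(k) == "cooperative" for k in kids) / len(kids)
```

The point of the recessive encoding is that selection sees only the phenotype, so the allele can ride along invisibly in carriers even while pure cooperatives are being selected against.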
Hmmm, this makes me want to try to duplicate the experimenters' results in software and then start playing with the model... Of course, the fact that I've never directly worked with Bayesian networks or genetic algorithms could put a damper on that...
1Depending on the experimenters' exact implementation of the neural network and the inheritance scheme, it is probably not quite accurate to use the word "gene" here. At this time, by "gene" I am simply referring to some group of coefficients in the Bayesian network which produce a particular behavior. I am also going to pretend that this "gene" is atomically heritable, for purposes of this thought experiment, even though that may not reflect the experiment at hand.
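To make "atomically heritable" concrete: I'm imagining recombination that only cuts at gene boundaries, so a whole block of coefficients always travels together. A sketch under that assumption (the block size is arbitrary):

```python
import random

GENE_SIZE = 3  # pretend a "gene" is a block of 3 network coefficients


def crossover(genome_a, genome_b):
    # recombine at gene boundaries only, so each coefficient block
    # is inherited atomically from one parent or the other
    assert len(genome_a) == len(genome_b)
    child = []
    for i in range(0, len(genome_a), GENE_SIZE):
        parent = random.choice((genome_a, genome_b))
        child.extend(parent[i:i + GENE_SIZE])
    return child
```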
2We can maybe envision a group of robots which devised a special "code", i.e. a series of pulses of light, that signified the presence of food only to others who were genetically "in the know", but this is fraught with all sorts of difficulties. For one, even a brief code would likely attract robots who did not comprehend the code. For another, robots who did not have the "cooperative" gene could still develop the capability to understand the code and exploit the cooperative robots. Perhaps most damningly, the development of this code is too complex to evolve all at once, and looking only within the context of food-signalling, there is no evolutionary advantage to a partial or weakly-comprehended code. In order to surmount this, the robots would have had to evolve a coded communication ability for some other purpose, which was then co-opted into food signalling -- and that is going way beyond the scope of even this quite fanciful thought experiment.