In the processes we've discussed so far, we have largely treated chance as nothing more than a blurring factor overlaid on purely deterministic notions. A persistent genotype-based advantage will, depending on how strong the difference is, often proliferate even in the face of chance (genetic drift). Theoretically, the fate of a variant depends (statistically) on the relative strength of its selective advantage over its competitors, and on the size of the population (which affects the chance aspects of reproduction). In this situation, the success of the favored variant is, to some extent if not completely, like that of a steady force. In fact, the models that show this largely assume a steady state (e.g., a similar selective advantage of the allele in question over long time periods). It is important in this theory, which is mathematical and not in question, that selectively neutral or even somewhat harmful variants can also, if with lower probability, proliferate at the expense of competing variants. Overall, however, one can fiddle with the details of such a model to make of drift a fly, but not a fatal flaw, in the classical selectionist ointment.
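The standard population-genetic result behind this can be sketched numerically. Below is a minimal Python illustration of Kimura's diffusion approximation for the probability that a new variant eventually fixes, given a selection coefficient s and population size N; the particular values of s and N are our own illustrative assumptions, not figures from any model discussed here.

```python
import math

def fixation_prob(s, N):
    """Diffusion approximation (Kimura 1962) for the probability that a
    single new mutant with selective advantage s eventually fixes in a
    diploid population of size N.  At s = 0 this reduces to the neutral
    value 1/(2N)."""
    if s == 0:
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

# A 1% advantage in a population of 1000: fixation is still far from certain.
print(fixation_prob(0.01, 1000))    # ~0.0198, i.e. roughly 2s
print(fixation_prob(0.0, 1000))     # 0.0005, the neutral 1/(2N)
print(fixation_prob(-0.001, 1000))  # small but nonzero: mildly harmful alleles can fix
```

Note that even a strongly favored new mutation is lost by chance about 98% of the time, while a slightly deleterious one retains a nonzero chance of fixing, which is the mathematical point made above.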
But let us see how far we can go by considering that context changes all the time, sometimes more, sometimes less. And if context changes all the time, selective values will change too. That makes things much less predictable over very long time periods, if these fitness differences are small. We have to note this because, as entrenched as deterministic selection is in the theory of evolution, the idea that evolution creeps along at a usually (literally) imperceptibly slow pace is equally entrenched.
What if we assume that there is no natural selection relative to some trait we wish to follow? To what extent could chance (drift) alone lead to adaptive traits? We illustrate this notion with an example of what we called receptor-mediated evolution in Chapter 10 of our book The Mermaid's Tale. We schematically illustrate the evolution of a complex trait without any natural selection. The legend is below the figure:
A. Free-floating cells with environment-sensing surface proteins. B, C. These experience mutations that make them able to adhere to copies of the same receptor on other cells (mutation-bearing cells are differentially shaded). This leads to aggregations of cells. We can call it an ‘organism’ at some stage. D, E. Receptor-based aggregates sequester cells of specific mutational types or ‘species.’ The cells within a cluster can differentiate by gene expression, depending on whether they detect contact with the outer environment or not, forming specialized subsets (eventually leading to organs). F. A cluster can shed individual cells that will then divide to form new clusters of the same kind as their parents. G. Mutations leading to the release of just the extracellular part of the receptor allow it to bind to related cells elsewhere, triggering them to differentiate into new clusters--an early form of signaling.
Starting small and in some local area, there need be no serious competition for resources and hence no natural selection against (or for) the modified receptors that evolve by mutation in this little story. Cells with like properties encounter each other randomly or locally because that's where they were formed.
This is of course schematic and hypothetical. But it shows, we think, that complexity can in principle arise slowly, element by element, without the need for competition for resources or overpopulation and so on. What is required is that over time in some location, a variety of mutations arise among countless individuals (here, starting with cells). Unless or until they do arise, evolution doesn't of course occur! This doesn't preclude the new evolving forms experiencing selection in some way or at some time, but the point is that it need not be a necessary part of the dynamics. If chance combinations of non-harmful genotypes arise, and environments change, or a randomly arisen combination happens now to offer a viable function, can that continue to improve (very slowly) over time?
When, whether, where, or how often this sort of phenomenon accounts for adaptive change, very slowly and locally, is a matter to think about and perhaps there would be ways to test its credibility. The slower evolution works, the greater is the plausibility that such phenomena can be a part of adaptive evolution--by drift and without the need for natural selection.
Drift? Maybe--but is it, too, a mythological concept?
We have argued that chance in the form of what is called genetic drift must play a role in evolution. The course of evolution involves elements of competition but inevitably also of chance. Chance has at least two relevant meanings here.
First, we might say that two foxes have somewhat different bodies, but are the same when it comes to catching rabbits. The chance of a successful chase is the same.
Second, genetic mutations in DNA sequence certainly happen sometimes by what is essentially chance: a cosmic ray zaps your DNA somewhere in a way that is totally unpredictable and, most important for evolution, has no relationship to any trait that may affect the fitness of the victim. When it comes to reproductive success, there is no selective difference between the new and competing existing genotypes. For each genotype, the chance of reproducing is the same.
Now, how can we tell if the two foxes, or the two genotypes, have the 'same chance' of success? What does 'same' mean here and how on earth could we possibly tell?
In this sense, one can never prove, essentially not even in principle, that two functional states are identical--that the difference is exactly zero. By this criterion even drift becomes a mythological, if not mystical, notion. Or you can take the position of a physicist who believes in deterministic laws of nature: then certainly at some level, even if you can't see it, there is a fitness difference. But as we have seen repeatedly in this series, that is then an assumption, and it defines all evolutionary change as being due to natural selection: if it survived, it was selected for, end of story.
That is, as we have repeatedly said, a definition not a scientific statement. But there's more than that.
Too small to detect, yet treated as if so important?
Let us suppose for the sake of argument that selection of a purely deterministic sort (steady, fixed selective difference between alternatives, no chance element, etc.) is taking place. Let's say one state has a 1% advantage over its competitor. Does this sound small? Well, for evolutionarily relevant natural selection that would be considered quite unusually strong (remember, we're not here discussing artificial selection, or selection such as for antibiotic resistance, which can be extremely strong). Most selection in real-life Nature is probably at least ten times weaker--differences on the order of one part in a thousand or less.
But let's stick with the strong 1% advantage. That means that you have 101 offspring to my mere 100. Here again we're letting it be deterministic, not just a long-term average over a species' populations and countless generations. Such a difference would be exceedingly difficult, or sometimes statistically impossible, to document from actual samples of completed fitness in a natural population. Even if such a difference persisted for the thousands or more generations required for a major trait adaptation, it could not reliably be estimated at any given time, and that means for every given time (because even in generations when you did detect it, by some statistical criterion, you could not reliably know that this wasn't a fluke of sampling).
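To put a rough number on how hard such a difference is to document, here is a back-of-the-envelope power calculation in Python. It treats offspring counts as roughly Poisson (variance near 1) and uses conventional 5% significance and 80% power; all of these are our illustrative assumptions, not figures from the text.

```python
# Approximate sample size per genotype needed to detect a difference in
# mean offspring number, using the standard two-sample normal power formula:
#   n = 2 * (z_alpha + z_power)^2 * sigma^2 / delta^2
# Assumptions (ours, for illustration): Poisson-like offspring counts
# (sigma^2 ~ 1), two-sided alpha = 0.05 (z = 1.96), 80% power (z = 0.84).
def n_per_group(delta, sigma2=1.0, z_alpha=1.96, z_power=0.84):
    return 2 * (z_alpha + z_power) ** 2 * sigma2 / delta ** 2

print(round(n_per_group(0.01)))   # 156800: ~157,000 individuals per genotype for a 1% difference
print(round(n_per_group(0.001)))  # 15680000: ~15.7 million for a 0.1% difference
```

On these rough assumptions, you would need completed-fitness data on well over a hundred thousand individuals per genotype to detect even the "strong" 1% case, which is why we call such differences effectively undocumentable in natural populations.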
This in no way implies that slow, even steady and deterministic selection, the usual image, does not occur. But it does mean that the image of great advantage in the raw competition of Nature is an exaggeration of large proportions. It essentially equates the carnage in the backyard mainly with adaptive selection, rather than mainly with the plain carnage of everyone seeking its dinner.
This has serious implications for humans and for those who seek Darwinian explanations for every little human trait, physical or especially behavioral. The fact is that adaptive differences are generally so trivial that they have no real import at any given time. That is, they should not be used as tools to justify discrimination, inequality, and so on. Whether and how policies based on different traits, talents, and the like should be implemented cannot usually be justified on evolutionary grounds. Evolutionary grounds are about net reproductive success, not human cultural values (in another post we noted that scenarios contrary to the usual ones are easy to construct!).
Tempering excessive invocation of natural selection is an important reason that we decided to do this long series about the nature of evolutionary adaptations and change. That is because we see a sometimes rather fervent eagerness to revive evolutionary value judgments about people as individuals or as labeled groups.
Of course, many genetic variants lead to serious disease and clearly may impair reproductive success in a huge way. But we treat disease for its own sake, not because of its evolutionary import. Mixing sociological judgments with evolutionary theory is to dabble in the Devil's game, as history has shown.
If a trait is so adaptively important, why is so much variation still around?
If selection were strong because a trait were being refined or fine-tuned, and selective differences were strong enough to make a contemporary mountain rather than a molehill of it, why is there then still so much variation? Why hasn't selection made everyone almost alike?
For example, if intelligence (as in IQ scores, say) were so vital to the human place in Nature, why is there such a range between the very smart and the very not-so? Here, we are not referring to pathological mental impairment.
The answer is that either trait differences aren't actually that relevant to evolution--that is, they make little difference to net reproductive success--or there is a balance between lowered reproduction due to selection, the blurring effects of chance, and the input of new variation by mutation and recombination. This would probably be the preferred explanation for most theoretical population geneticists. But if true, it implies that selection really is not that strong after all, possibly because so many genes are contributing to the trait that individual differences simply cannot be tightly purged by selection. Maybe this is in part because the many genes each have other roles to play as well, and can't be purged too tightly without affecting those traits.
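The "balance" idea can be made concrete with the textbook deterministic mutation-selection equilibrium. As a sketch (the mutation rate and selection coefficients below are our illustrative assumptions): for a recessive deleterious allele the equilibrium frequency is roughly the square root of u/s, so the weaker the selection, the more variation persists.

```python
import math

# Equilibrium frequency of a recessive deleterious allele under classical
# mutation-selection balance: q_hat ~ sqrt(u / s), where u is the mutation
# rate toward the allele and s its selective disadvantage when homozygous.
def q_hat_recessive(u, s):
    return math.sqrt(u / s)

u = 1e-6  # illustrative per-generation mutation rate
print(q_hat_recessive(u, 0.01))    # 0.01: strong-ish selection keeps the allele rare
print(q_hat_recessive(u, 0.0001))  # 0.1: very weak selection leaves ~10% frequency
```

Cutting the strength of selection by a factor of 100 raises the standing variation tenfold in this toy calculation, which is the sense in which abundant variation argues for weak selection.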
Further, even if selection of a classical deterministic kind is at work, it could be that only the very fastest fox, relative to the slowest, has an advantage. That will move the average chasing speed of foxes towards being faster, but apart from the rare outliers, there need be no fitness-related differences. This again is a very different idea from that of eagle-eyed, ever-vigilant, fine-tuned selection.
One should also realize that it isn't that foxes today are somehow more 'fit' than their distant ancestors were--that they struggled through eons of doing poorly to evolve into being OK today. At every age they were, as a population, perfectly fit for that particular time, as one would have said if one were observing them at the time.
There are many issues here, but the bottom line is that at the level of individual genes, and probably at the trait level itself, selection is just not very precise and/or that the species does perfectly well with its broad trait variation. Again, too big of a deal should not be made, on evolutionary grounds, for the range of differences we see.
We risk reductio ad absurdum by taking too strong a stand in any direction when it comes to the evolution of complex traits. In any discussion of evolutionary factors, call them what you will, we face a major challenge in determining the reason for evolutionary success--or even, one might say, the meaning of 'reason' in this context. We are stuck in a profound way with statistical statements based on empirical and inherently limited samples, imperfect measurements, unobservable past events, and essentially subjective testing and decision-making criteria (a subject we've discussed before).
Tiny differences, be they 'due' to chance or some very weak force, can be imperceptible by such criteria but can accumulate. They can lead over eons to something useful, and even if now and then nudged by other forms of selection, differential proliferation can occur essentially by what is reasonable to call chance.
Gene duplication is a form of drift that in principle can lead to redundancy, which can buffer the organism against future mutations while leaving one of the copies free to acquire new function. Most of our genomes have arisen, from early days, via duplication and rearrangement of existing bits of DNA (exon shuffling, inexact recombination, translocations, transpositions, and the like). Even standard genome evolutionary theory and explanations recognize this. Relative to future function, gene duplication is a random event, like point mutation. But duplication of existing functional elements provides a potential source of new function, usually related to current function--some of which can serve as fortunate 'pre-adaptations' for the organism's niche at that or later times.
Even more than that, as I have recently discussed in one of my regular column installments in Evolutionary Anthropology*** (with references to others' work), the chance that a random DNA sequence is long enough to code for a protein of a respectable 50 or more amino acids is not trivial, if mutation or translocation or duplication has generated a promoter sequence. Any nucleotide triplet can serve as a codon, only 3 of the 64 possible codons are STOPs, and there are 6 possible reading frames to try. Other elements (polyA site, ATG, etc.) may also be needed, but genomes are big, organism numbers huge, and earth history very, very long. What is transcribed need not be translated to be functional, as with the plethora of noncoding RNAs. And the protein need not have a function right away, so long as it doesn't get in the way of what a cell is doing.
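The codon arithmetic in that paragraph is easy to check with a quick sketch in Python (the 50-codon length is just the example used above):

```python
# Probability that 50 consecutive random codons contain no STOP codon
# (3 STOPs out of 64 possible triplets), in a single reading frame.
p_open = (61 / 64) ** 50
print(round(p_open, 3))  # ~0.091: roughly 1 in 11 random 150-bp stretches stays open

# With 6 reading frames to try, the chance that at least one frame is open
# is correspondingly higher.  The frames overlap, so treating them as
# independent is only a rough upper-bound sketch, not an exact calculation.
p_any_frame = 1 - (1 - p_open) ** 6
print(round(p_any_frame, 2))  # ~0.43 under that rough independence assumption
```

So even in a single reading frame, open reading frames of respectable protein-coding length are not at all rare in random sequence, which is the point about where new genes can come from.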
A function can arise later. Or not. In the long history of life and the diverse functions, choices, and opportunities of species, the various forms of adaptive response discussed in this series may apply even to essentially randomly arisen genes. When we dismiss anything but classical natural selection as our explanation, we close off other possible accounts for traits that we see here today.
We hope, at least, to have provided some food for thought on this fundamental aspect of causation in life and its genomes.
For discussions of ways chance and selection can mold what we see in genomes, work by Michael Lynch makes good reading (e.g., The frailty of adaptive hypotheses for the origins of organismal complexity, PNAS, 2007 and his book The Origin of Genome Architecture); of course, he may not agree with what we say here.
***Weiss, K. Little Orphan's Nanny: Where do genes come from and who takes care of them? Evol. Anthropol. 22: 4-8, 2013 (paywalled--email me for a pdf)