Monday, April 30, 2012

Metaphysics in science, Part II: Life in a cave

'Metaphysics' is an area of philosophy that goes back to the ancients.  The word has many meanings, but for our purposes it marks the contrast between the hard-nosed reality of the world we can see, touch, and feel, and the world of ideas in our heads.

The archetype of metaphysics, the one that illustrates the basic idea, goes back to Plato, and that's what we'll use in this series of posts.  In his Republic, he likened the world we live in to a cave, in which we sit facing the wall, able to know only what we see there: mere shadows of reality.  That's because there is a 'real' world outside the mouth of the cave.  Light shining into the cave casts shadows of the objects in that world onto the wall of the cave.  All the encaved people can see are these shadows, ephemeral indications of the True things, which they can never view directly.

So, in this metaphysical view we see chairs, dogs, good and evil only as imperfect shadows of their true ('ideal') nature.  We assume these ideals, but have to infer them via their imperfect representations.

Other philosophers objected to this idealism, saying that the idea of the actual existence of these metaphysical entities like 'dog' or 'good' was mistaken, and that what we actually see is all there is.  For our purposes we can refer to this purely empirical view as Aristotelian.  There may be chairs and dogs, but there is no 'the chair' or 'the dog'!

In the early part of the modern Age of Science, René Descartes suggested a fundamental dualism between mind and matter, which can be seen as an extension of these differences.  There was matter--the real stuff we can grasp and measure and poke electrodes into--and then there was something resident, but immaterial, in our heads (call it mind, soul, or whatever).

Science has become hard-nosed about reality: it is only what we can touch, feel, or measure physically.  We know that consciousness is elusive to our present understanding, but believe (or, at least, operationally act as if) mind is just the 'emergent' result of our neurons working by the billions in the confined physical space of our heads.

More importantly, in the Age of Science it has become routine to sneer at metaphysics.  Science, perhaps arrogantly, perhaps accurately, views the testable empirical world as the only world.  It wants to disabuse us of abstract, undemonstrable soft-headed metaphysical notions.

But let's take a look at some of the issues from a biological viewpoint.
After Mendel, the idea of 'genes' developed.  Genes (at the time inferred from parent-offspring resemblance, as in Mendel's peas) were assumed to exist, but we didn't know what they were.  Darwin's idea of evolution invoked these entities (he didn't use the term itself, which was coined much later).  The idea was of things transferred in Mendelian patterns; we had the audacity to name them, though we had no idea what they actually were!

But other schools of thought, Marxism in particular, strongly resisted such notions as metaphysical:  we say there are genes, but we never actually see one, so they really are like Plato's shadows in the cave; and materialists, the argument went, had long ago abandoned such infantile thought-centered views of reality.  Genes, on this view, were metaphysical entities invoked by the capitalist world to justify ruthless Darwinian competition and cruel inequality, and thus to excuse social inequity.

Marxists (in the Soviet Union, and in the particular incarnation of agricultural policy under a plant biologist named Trofim Lysenko) argued that individuals are not condemned by their metaphysical 'genes', and can improve themselves, and incorporate (that is, literally build into their material nature) their improvements.  These improvements can then be transmitted to their progeny: real gains are thus made without invoking laws of competition and the like, which they argued were just convenient metaphysical entities.

It turned out, of course, that the hypothesized 'genes' really do exist as material, molecular entities.  Good-bye metaphysical notions!

But is this really as clear-cut as all that?  And does it matter?  Do we really live, or give the lie to, our materialism?  Are we true to our sneering at metaphysics?  The answer, even for hard-nosed science, is at least not totally clear.  Let's look at genes first, and then evolution.

Genes:  Platonic shadows of nonexistent ideals?
There are countless references to genes these days.  Genes have names.  So, for example, there is the 'beta globin' gene.  'The' normal variant of 'the' globin gene is called HbA.  But this gene is the one that, in one of its variant forms (called HbS), confers sickle-cell anemia (a red blood cell disorder) on those who carry two copies of the HbS form.  We can talk, seemingly sensibly, about 'the globin gene.'  But what is it?  Indeed, is there a globin gene?

In fact, the globin gene is a Platonic ideal if there ever was one.  What we each have are Aristotelian manifestations of what we call 'the' globin gene.  Like chairs or dogs, they can be different in each person (as in the HbS), but there is no 'the' globin gene.

The same goes for the driving force of so much of today's biological science, 'the human genome'.  Does it exist?  We think the answer is, manifestly not!

The human genome is a phrase that today refers to a single reference sequence of human DNA, available for all to see online (well, you can't actually see it, but you can scroll your screen along its sequence of A, C, G, and T nucleotides, 3+ billion of them).  We have agreed to accept this as 'the' human genome.  But in reality it is neither 'the' human genome sequence nor even 'a' human genome sequence!  In fact, it's the sequence of bits of DNA from several people (each of whom had two copies, one inherited from each parent).  Nobody has, and nobody ever has had, 'the human genome' sequence.  And there is debate about whether this is an appropriate abstraction.  For example, the donor humans were 'normal' when their DNA was sampled, but will undoubtedly die of something, which may be affected by their genes.  So it's not a 'normal' genome in any serious sense; nor was it randomly sampled from our species.  So some have suggested that we should collect a set of sequences as references, or use some other kind of abstraction that incorporates known sequence variation in humans.  That, however, would still be a judgment about what to consider as our Platonic abstraction representing the DNA that we each carry around.

It is no secret (and no problem, either) that the reference sequence serves as a baseline against which we compare other human DNA sequences to identify and examine differences, variation, and the like.  That's a convention, like the convention that a red light means 'Stop'.  If all we see are actual instances of human DNA, then there is no 'the human genome sequence' any more than there is 'the chair'.  So we formally accept the notion of a Platonic abstraction, and we think every scientist would argue that this is just a pragmatic abstraction that makes our daily work possible, anchoring it so we can be objective about human genetics.  There isn't any particular problem about its metaphysical nature.

It's something like pi, the ratio of a circle's circumference to its diameter, or the square root of 2.  These numbers can never be written out exactly (their decimals go on forever), but they are immensely useful and their 'results' are seen all over the place.  For that matter, there isn't a perfect 'circle', either: only its Platonic ideal.

The Marxist allegation that our use of 'gene' was metaphysics was right in a sense, but it was not 'just' metaphysics: real genes have proved to exist as empirical, material molecules that we can study, that are transmitted, that change, and that affect our traits.  The guess that inherited elements exist was based on Mendel's work and much else, and the idea of a 'gene' was a practical guide used very effectively to find the actual manifestations of the idea.  We were able to do this systematically because research consistently homed in on manifestations of this guiding, or one can say reference, ideal.

Evolutionary theory: what sort of reality is it?
Well, if we can see the issues in terms of material things like genes, what about less clear things like 'good'?  Does 'good' exist in any ideal sense, other than in various manifestations that, indeed, must be judged individually by individuals, and hence may not really have actual existence as entities?  Of course, philosophers make their living dealing with such questions, but we can turn to another manifestation (so to speak) of this kind of question: scientific theory.  Let's take Darwin and evolutionary theory.

Here we are dealing with processes, not things.  Species exist, again if we recognize that we give labels to Platonic ideals (e.g., 'type specimens'), and that the idealism is just a path towards understanding the real world that does exist.

What is evolution, or, for example, natural selection?  Are these abstract concepts that exist on their own, or are they instances?  If the latter, what are they instances of?  What is the standing of a theory?  In particular, if the chair you're sitting on is, after all, a chair, one would have no objection to our speaking about chairs generally as if the Platonic ideal existed.  That is, the assumption that chairness exists is not particularly worrisome.

But what if a suggested ideal in this abstract sense is a theory?  One might say that the evolution of antibiotic resistance by bacteria is an instance of natural selection.  But what we want to know is whether 'natural selection' is somehow 'out there' as an ideal, overarching truth, or instead is just a term we give when we observe something happening.  This is not a trivial question.  That's because we know not everything is a chair, or dog, so we have no difficulty dealing with symbolic abstractions like 'chair' and 'dog': it doesn't make us imagine that there is a true, ideal 'dog'.

But a theory is supposed to be universal, ubiquitous.  For example, there is a big difference between saying 'sometimes natural selection produces adaptations' and asserting that 'adaptations are due to natural selection.'

Here's a way to see that there's a problem.  If a theory represents a 'law of nature', for example, one can ask whether that very idea invokes Platonic realities.  We don't just want to explain an apple falling on Newton's head as 'gravity was involved in this instance', as if other apples need not fall on heads.  We want gravity to be exceptionless, universal, out there in the distant galaxies beyond our telescopes, and inside our cells: 'out there' in some real, not just imagined, sense.  And what about the evidence for a theory when it exists...or doesn't?

These are important issues of practical value, and they were at the core of modern ideas about science, including induction and replication (repeated observations reveal the general nature of something) and deduction (with a theory we can reliably predict consequences when we observe causes).

We will continue to explore this in the next posts.

Friday, April 27, 2012

Metaphysics in science, Part I: "Call us when you actually find something."

What constitutes a 'finding' in science? 
It is supposed to be a discovery of something about Nature, rather than, say, just an 'idea' about Nature.  It's supposed to be real and not a matter of metaphysics.  Metaphysics had a long history in philosophy, when philosophy was the lead-in to what we call science today.  In today's sneering world of science, science is fact, and metaphysics is made-up Blarney rather than stuff that's real.

But what is a 'finding' today?  Administrative interests are constantly on the prowl for results that their company, institute, portfolio, or clients can use to help lobby for more funding, and the news media aren't far behind.  But everybody's busy, so what counts in science in this sense is something with a melodramatic picture that you can say faster than the word 'science', and that will grab the interest of someone whose attention span doesn't go beyond a Tweet.  Some populations are known to anthropologists by names like 'the basket weavers' -- we'll be known as 'the boasters'.

This attitude is everywhere and seen all the time, and it's detrimental to good science.  Cakes take a certain time to bake, and not all food is fast food.  Science has to bake to come out as good as it should be, and can be.  Quick answers yelled from rooftops (of Nature's offices, say) and rushed out of the oven for quick display are notorious for false starts and hyped findings that are not confirmed later, for reasons noted a few years ago by John Ioannidis.

By Hooke or by crook science in the 21st century
When Robert Hooke first turned the microscope's eye onto nature, we got the first glimpse of things that had previously been impossible to see.  He documented many of them in his 1665 Micrographia, like details of insect bodies and the uneven surfaces of polished pins.  Hooke turned microscopic views into visible-scale drawings, and made flea hairs visible....and fleas were important!  We were naive then, and learned a tremendous amount from the new lens on the world.  Blowing things up was legitimate.



Today we live in a similar era, when every tiny finding, visible only through a massively-humongously-parallel-generation sequencer, is blown up--that is, blown out of proportion--in our puffery-laden, lobbying, PR-driven world.  Unlike Micrographia, however, not all of today's fleas are actual 'discoveries' in the same sense.  Yet the PR machine wants to report 'findings', and anything that can be claimed to be one (with a nice figure) is going to be trumpeted.

If it's more than 140 characters, it's not real!
We're pressured to consider only simplistic sound-bite-sized results as true 'findings', or to be embarrassed if we haven't got a slew of papers reporting our sound-bite-sized things in 'high impact factor' journals, or hyped by the New York Times or the BBC.  Apparently 'just' understanding Nature, most of whose traits are subtle and not melodramatic, isn't real science.  Hype is what the public is sold, it's what's sold on television and on front pages, and it's basically all that congressional staffers and their like are told about.

Now, this might be OK if the recipients of the hype--such as policy makers who have to cough up the funds--weren't so inundated by Everything They See is Phenomenal that they know full well how to ignore most of it.  Or, worse, if they actually don't know that all they see are snow jobs, we're in deep trouble.  So our version of micrographia has become the blowing up of mainly trivial things we hadn't seen before, and making them sound as if they were previously hidden giants.

Two negatives do not make a positive, but one does!
Consistent with all of this is the notorious under-reporting of negative results.  It's worse than unethical in the drug trial realm, because it leads to obvious bias, unjustified profiteering, and actual harm (sometimes lethal) to patients.  Sometimes it's intentional, but even when it's just that investigators think negative results are not worth reporting, or the 'premier' journals don't think they're worth bothering about (i.e., won't sell copy) and won't publish them, it's harmful because it systematically biases and misleads science.

Take GWAS.  People do publish the results, but many of those who aren't boasting of their purported revolutionizing success (because of the 'positive' findings), are bemoaning the failure of GWAS.  "Well, see," they say, "you never find anything!"  GWAS are a failure!

Nobody thinks more than we do that GWAS and its 'next generation' successors are being oversold, in a bad way and often for bad reasons.  Nonetheless, and we've said this before, GWAS have been a fine success!  The 'negative' findings (no real blockbuster genes, but instead many tiny genetic contributions to risk) are not a negative but a positive: they positively tell us how complex nature is.  They are findings!

Findings about the real nature of Nature may be dramatic, as Hooke found in his day.  They do occasionally turn up in our own day, and that makes science fun and interesting.  But most of Nature is complex and not amenable to quick-fix answers.  If the object of science is to plumb the truths of Nature, then complexity itself, difficult as it is to work out or explain (say, in evolutionary terms), should be thrilling enough: the grain, seen without our being blinded by a blizzard of chaff.

A subtle, nuanced, careful approach to science is often being overlooked these days because of the obsession with the current idea of what constitutes a 'finding'.  Whether this state of affairs is just part of the game in a complex middle-class culture, or has tragic implications for the kind of work that could be done but isn't, is anybody's guess.  Fixing the system would take major reform, and may not be in the cards given the nature of our society.

Thursday, April 26, 2012

Darwin vs Wordsworth: Is Nature cruel or beneficent?

In what looks an awful lot like cooperative behavior, groups of birds often get together to 'mob' a predator.  That is, they swarm predators together, in an attempt to chase them away from nests or from a food source, and so on.  Birds often make mobbing calls that alert nearby birds to danger and solicit their aid.




A new paper in Biology Letters suggests that "long-term familiarity" is a factor in whether or not birds choose to help each other when faced with threat from a predator.  A.M. Grabowska-Zhang et al. show that "neighbours that shared a territory boundary the previous year are more likely to join their neighbours' nest defence than neighbours that did not share a boundary before."

Predation is a major cause of death in nestlings, so driving predators from an area in defense of the nest is crucial.  And, the more birds that can mob a threatening predator, the more likely they'll drive it away, so soliciting the help of neighboring birds is also crucial.

Grabowska-Zhang et al. "tested the hypothesis that long-term familiarity between territorial neighbours is positively related to joining behaviour in predator mobbing."  They did this in a population of great tits breeding in nest-boxes in Oxfordshire, in the UK.  These birds have been tagged and followed in previous years, so that their ages and familiarity are already known.  The researchers served as predators, by approaching a nest and making noise, and then assessed the birds' response.
For pairs of nests where each contained at least one familiar individual, in 12 out of 16 trials (seven out of eight nest pairs), at least one neighbour joined the mob. Individuals from the unfamiliar group joined the mob in just two out of 16 trials (one out of eight nest pairs). No neighbours joined the mob in first-years' nest.
That is, they report that they've demonstrated a significant influence of familiarity on taking part in solicited mobbing behavior.  The idea that birds decide whom to cooperate with is an interesting one -- apparently, they don't help just anyone.  But what interests us more is that the authors conclude that they can't tease out from this study whether the birds cooperate because they are good neighbors (altruistically), or because they figure they'll get help from their neighbors when they need it themselves (selfishly).  The same behavior can be interpreted in two very different ways.
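As a back-of-the-envelope illustration only -- this is not the paper's own analysis, and it ignores the non-independence of repeated trials at the same nests -- here is how one might ask whether the nest-pair counts quoted above (seven of eight familiar pairs versus one of eight unfamiliar pairs with at least one joiner) could plausibly arise by chance, using Fisher's exact test:

```python
# Toy re-analysis of the quoted nest-pair counts; not the published statistics.
from scipy.stats import fisher_exact

# Rows: familiar vs unfamiliar nest pairs; columns: joined vs did not join
table = [[7, 1],
         [1, 7]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, two-sided p = {p_value:.3f}")
# p comes out around 0.01 for these toy counts, but the published analysis must
# also handle the repeated trials at the same nests, which are not independent.
```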

This is not new to this study, of course -- altruism has long been explained away as selfish.  And similarly, cooperation as competition.  There is a danger in reading ourselves into what we see in Nature.  It's a problem of subjectivity intruding where we hope and strive to be objective to the extent possible.  The issue first of all can affect study design itself, and then the interpretation of results.  Thus, if competition is the lens of your view of Nature, you can design studies to find competition or evaluate organisms' success in comparative terms.  If cooperation is your bent, you can study what happens when organisms work together for whatever reason.   The truth, as this study shows, is typically a mix.

The danger extends to reading other work, in science but even in other areas.  One can mine important thinkers for statements supporting one's bias, just as can be done with Biblical exegesis.  For example, at about the same time, and totally unbeknownst to each other, two famously brilliant authors wrote about the awesome splendor of Nature.  Darwin looked upon Nature's 'grandeur' (his word) and saw beneath it a relentless, impersonal, and savage 'struggle for survival' against limiting resources.  In an 1838 notebook, he denigrated philosophers who were trying to understand life by saying that one would do "more towards metaphysics than Locke" by understanding baboons.  But last night Ken was writing something for Evolutionary Anthropology that referred to Darwin's quote.  He has also been reading the famous pastoral poems written at almost the same time by the poet laureate William Wordsworth.  Like Darwin, Wordsworth denigrated stuffy academics, remarking that one who wished to understand life should turn not to the work of philosophers but to Nature's magnificent panoply reflecting God's beneficent intent.

Birds may not think about competition vs cooperation in ways that we do, but in their own way they show us the nuances of Nature.

Wednesday, April 25, 2012

The ills, wills, and won'ts of science

Science is a rather large segment of our society, and a thoroughly human endeavor.  It's not apart from the rest of our social, economic, and political world even as it attempts to understand that world.  With thousands of universities needing students, faculty, and resources, and all of us seeking prominence and recognition, the rather 'bourgeois' aspects of this social phenomenon are not unexpected.  How could it be otherwise than that we establish hierarchies, advocacy groups, and tribal factions competing both for ideas and funds--or, more nobly, to show the others how (as we believe) the world in our area of expertise really is?

There will inevitably be pyramids of privilege and uneven wealth distribution, and in our society competition will drive this based on the widespread belief that competition (while harsh) is good for something, if not for the human soul.  In such an environment we can expect some outright cheating (pretty rare, fortunately), lots of sources of biased reporting and disingenuous 'null hypothesis' testing, dissembling, hyperbole and self-promotion.  Bureaucrats want to keep their portfolios of research projects large and richly funded.  University administrators want the overhead.  Journals want the material, and since they are businesses, the splashier the better (e.g., see this analysis of what's wrong with science publishing and how to fix it).  Ranking systems like 'impact factors' drive such bean-counting environments, naturally--how could it be otherwise?

Then there are the companies that make the instrumentation and other kinds of laboratory gear, including computers and software, that research depends on.  These companies, only naturally, will do what it takes to persuade us that their latest models are vital to success.  And what about the media?  They demand splash for their survival.  That's only natural, too.  And politicians?  They thrive on promises of health miracles, world dominance, scientific thrills, and various kinds of demagoguery by which fears are raised and the wisdom of funding research to relieve them is promised.

So the hand-wringing and finger-pointing about these problems are only natural too.  Should we therefore stop doing it?  We think the answer is absolutely not!  First, there will always be faults in any human endeavor, and in our type of society, for an endeavor this large, faults will be built (over time, by us!) into the system.  Some will corner markets better than others.  Most work will be chaff, even if there will always be amazingly insightful, skilled work that positively contributes to knowledge.  For every Beethoven or Wordsworth, Leonardo or Darwin, there will be a hive of drones who leave little mark on history.

Major changes rarely arise from brilliant new discoveries (the Darwins of the world); most change occurs incrementally, the tanker-of-science gradually altering course as fads come and go, glittering labs fading as a new fad (whether good or bad) takes over.

Genome sequencing and GWAS as an example
We got many MT 'hits' last week by daring to point out that a substantial number of people are saying and writing that the payoff of GWAS and whole genome sequencing in large numbers of humans has not been great, and that people may be tiring of it and of the long-term cost commitment it requires, which prevents other areas (and investigators) from being funded.  There are large interests doing such work and committed to it (for reasons that at least include their already vested interests, as well as various scientific rationales).  They have large amounts of money, in many countries, and scientists know very well that a big project gains political investment that people will then be unwilling or unable to close down.  There are many examples; big biobanks, just now aborning, will be another.  They'll claim down the road that they are too big to fail, er, to have their funding cut.

Of course, the focus on, or obsession with, genes and 'omics' (large-scale, exhaustive, generally hypothesis-free and exploratory enumerations) may not fade.  Predictions that it is playing itself out through overkill or hyper-hype may be wrong--we'll see.  Indeed, such commitments cannot fade very rapidly, even if they were to deliver nothing at all (which isn't the case), once a hold on huge long-term funding is secured.  But whether it pays off or not, there are many who feel it has co-opted too much else relative to its payoff.

This is but one example of the issues being raised about how science, The Enterprise, is being conducted these days, not by wealthy back-yard tinkerers but by a large middle class housed in large institutions.  Many worry about the faults, but of course they (and, sometimes, we) are on the cranky fringe that always exists.  Cranks can have their own agendas, including jealousy, of course.  But without at least some nudging from those who see the faults--and the faults in modern science are deep and wide--course corrections might be even harder to make, and lack of correction much costlier to the society that pays for it.

Tuesday, April 24, 2012

Genetic methods and technologies with potentially important payoff

Last week a group of cancer researchers, largely in the UK, announced in Nature that after characterizing the genome and transcriptome of some 2,000 breast tumors, they have found that breast cancers can be divided into 10 subgroups, or even 'different diseases', each with characteristic aggressiveness and response to therapy.  The 'transcriptome' is the subset of the roughly 20-25,000 human genes that the tumor cells are actually expressing (other genes are presumably quiescent and not needed by those cells).

To us, this demonstrates a welcome, appropriate use of genetics on an important problem -- even if the typical hype that's come with the announcement is not so welcome: this will 'revolutionize' treatment, this is 'breakthrough research', and so on.  It's a problem when every story is accompanied by such hype, a boy-crying-wolf kind of problem.  This study may well be very important work, but let's let it prove itself before shouting from the rooftops.  The human genome sequence, after all, was going to allow us all to live forever by 2020 if not earlier, leaders proclaimed in the '90s.

From the Nature paper:
We present an integrated analysis of copy number and gene expression in a discovery and validation set of 997 and 995 primary breast tumours, respectively, with long-term clinical follow-up. Inherited variants (copy number variants and single nucleotide polymorphisms) and acquired somatic copy number aberrations (CNAs) were associated with expression in ~40% of genes, with the landscape dominated by cis- and trans-acting CNAs. By delineating expression outlier genes driven in cis by CNAs, we identified putative cancer genes, including deletions in PPP2R2A, MTAP and MAP2K4. Unsupervised analysis of paired DNA–RNA profiles revealed novel subgroups with distinct clinical outcomes, which reproduced in the validation cohort.
('Unsupervised analysis' refers to statistical clustering in which groups are allowed to emerge from the data themselves, rather than being specified in advance.)
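To give a feel for what 'unsupervised' means here -- and only a feel; the paper's integrative clustering of paired DNA-RNA profiles is far more elaborate than this -- here's a toy sketch in which subgroups are recovered from simulated expression data without any labels being supplied:

```python
# Toy sketch of 'unsupervised' analysis: cluster tumors by expression alone,
# with no outcome labels given to the algorithm. All data here are simulated;
# this is only an illustration of the idea, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_tumors, n_genes = 200, 50

# Simulate two hidden tumor subgroups, differing in the expression of 10 genes
labels_true = rng.integers(0, 2, size=n_tumors)
expression = rng.normal(size=(n_tumors, n_genes))
expression[labels_true == 1, :10] += 2.0

# KMeans is told only "find 2 groups", never which tumor belongs where
labels_found = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)

# Agreement with the hidden truth (allowing for arbitrary label switching)
agreement = max(np.mean(labels_found == labels_true),
                np.mean(labels_found != labels_true))
print(f"recovered the hidden subgroups for {agreement:.0%} of tumors")
```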

Both germline (transmitted in sperm or egg) and somatic (body cell) variation were found to contribute to tumor occurrence and architecture.  The terms are technical, but essentially all sorts of variation were found to be involved: CNAs (copy number aberrations), CNVs (copy number variants) and SNPs (single nucleotide polymorphisms), located either near the gene whose expression they affect (cis) or elsewhere in the genome (trans): all these contributed to variation in expression of genes associated with tumors.

a, Venn diagrams depict the relative contribution of SNPs, CNVs and CNAs to genome-wide, cis and trans tumour expression variation for significant expression associations (Šidák adjusted P-value ≤0.0001). b, Histograms illustrate the proportion of variance explained by the most significantly associated predictor for each predictor type, where several of the top associations are indicated. [Figure and caption from the paper.]
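For readers unfamiliar with this kind of analysis, here is a minimal sketch -- on simulated data, and not the paper's actual pipeline -- of what it means to associate a gene's expression with copy number across tumors, and of the Šidák adjustment used when testing many genes at once:

```python
# Minimal sketch (simulated data, not the paper's pipeline) of per-gene
# copy-number vs expression association, with a Sidak multiple-testing adjustment.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_tumors, n_genes = 500, 1000

copy_number = rng.poisson(2, size=(n_tumors, n_genes))      # stand-in for CNA dosage
expression  = rng.normal(size=(n_tumors, n_genes))
expression[:, 0] += 0.5 * copy_number[:, 0]                 # one simulated cis effect

# Test each gene: does its expression track its own copy number across tumors?
p_values = np.array([pearsonr(copy_number[:, g], expression[:, g])[1]
                     for g in range(n_genes)])

# Sidak adjustment: chance of a p-value this small at least once in n_genes tests
p_sidak = 1.0 - (1.0 - p_values) ** n_genes
print("genes significant after adjustment:", np.sum(p_sidak <= 0.0001))
```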

If, as the paper suggests, integrating genetic information about germline and somatic aberrations, along with tumor type, helps to clarify decisions about treatment of breast cancer, this will prove to be a valuable application of genetic technologies and information.

This is just the first step, however.  As the lead author, Carlos Caldas, said in an interview on the BBC Radio 4 program Material World on April 19, they've identified what he equated with continents, but the rivers, mountains, plains and other aspects of the landscape are yet to be determined.

Indeed, this is a study of 'primary' tumors, and it is not clear what the story is if the tumor has spread to other parts of the body ('metastasized').  Other recent studies have shown what cruder methods had previously shown: that new genetic changes occur that allow those secondary tumors to spread and grow.  Likewise, treatment itself selects for cells that are, by their good luck and the patient's bad luck, resistant.  This study suggests, assuming no post-study tumor recurrences, that such predictive methods could help treatment stay a step ahead of the tumor's evolution.

It's been clear for many years that somatic genetic changes may be important in disease, and cancer, a cascade of cells descended from a founding aberrant, misbehaving cell, is the classic archetype.  In such instances, it makes sense to search for variation among cells within the individual.  Whether or not the pattern is too complex to be very useful, only time will tell.

If such work can reveal useful information as this paper claims, and treatment can be focused on patient-specific traits, there may indeed be something to shout from the rooftops.  Whether or not complexity again rises to bite, this study shows an appropriate use of high-throughput genetic technology.

Monday, April 23, 2012

Brains are like jelly....and they're fluid, too.

Intelligence is malleable?
Two pieces in the April 22 New York Times Sunday Magazine suggest that the idea that intelligence is fixed at birth has been greatly exaggerated.  We can get smarter if we work at it.  According to one piece, we have to exercise our fluid intelligence, and in the other, we have to exercise our bodies.

Fluid intelligence lifting weights
In 2008, two psychologists, Susanne Jaeggi and Martin Buschkuehl, published a paper in which they reported that young adults who play a challenging game requiring concentration can improve their "fluid intelligence", which the NYT article defines as "the capacity to solve novel problems, to learn, to reason, to see connections and to get to the bottom of things."
Psychologists have long regarded intelligence as coming in two flavors: crystallized intelligence, the treasure trove of stored-up information and how-to knowledge (the sort of thing tested on “Jeopardy!” or put to use when you ride a bicycle); and fluid intelligence. Crystallized intelligence grows as you age; fluid intelligence has long been known to peak in early adulthood, around college age, and then to decline gradually. And unlike physical conditioning, which can transform 98-pound weaklings into hunks, fluid intelligence has always been considered impervious to training.
The inflexibility of fluid intelligence has been the explanation for why we can't do better on I.Q. tests over our lifetimes.  Though the pesky little problem of the Flynn effect, the sustained increase in I.Q. scores over decades in much of the world, has been a thorn in the side of those who hold that I.Q. is fixed.  And even if people have never settled on what intelligence actually is, the idea that at least we know it's fixed, and that most studies show a considerable amount of heritability, has led many to believe it must basically be due to the genotypes we're each born with.

Raven Matrix component of IQ test: fill in the blank square
Wikimedia Commons
So, if Jaeggi and Buschkuehl are correct that fluid intelligence can be improved with practice, a result they continue to demonstrate, this is a challenge to the idea that we're blessed or cursed with innate intelligence.  The idea is that intelligence must be similar to other highly heritable traits, like height, which is also susceptible to environmental effects -- even if within each individual's genetic or other constraints.

Mice lifting weights
The second intelligence story in the Sunday magazine comes at the issue from a different angle.  Mice given the chance to exercise get smarter.  Researchers determined this by giving them before and after cognitive tests, as well as before and after assessments of the structure of their brains.  And, as it happens, people who exercise get smarter, too.  Or at least their brains don't shrink nearly as much as they age as do the brains of sedentary people.
For more than a decade, neuroscientists and physiologists have been gathering evidence of the beneficial relationship between exercise and brainpower. But the newest findings make it clear that this isn’t just a relationship; it is the relationship. Using sophisticated technologies to examine the workings of individual neurons — and the makeup of brain matter itself — scientists in just the past few months have discovered that exercise appears to build a brain that resists physical shrinkage and enhance cognitive flexibility. Exercise, the latest neuroscience suggests, does more to bolster thinking than thinking does.
So, forget personalized genomic medicine, to get smart, just bike (or run) to work, thinking about something profound all the way.

Can this really be true?
Of course, Jaeggi and Buschkuehl have their critics.  Some simply don't believe that fluid intelligence is mutable, and point to studies that continue to support that view.  But J and B aren't the only psychologists who are beginning to find mutability, and as a result other psychologists are starting to believe their work.  But it's an interesting thing when expert assessment of scientific results depends on belief.  And the word is laced throughout the NYT piece.

Indeed, you're more likely to buy their work if you're not predisposed to think that I.Q. is genetically determined.  Well, and if you think I.Q. is real, measurable, not culturally determined and so on.  And where you come down on these issues seems to be correlated with your politics, at least to some extent.  Rather like where you come down on climate change, or evolution, or the genetics of how people vote.

But let's step away from the politics for the moment, and think about what our particular view of evolution might have to offer here.  Specifically, the idea, which seems fairly obvious, that evolution has been consistently good at producing adaptability.  Over and over and over again, so much so that it seems to us to be a fundamental principle of life, organisms have been imbued with the ability to detect, evaluate, and adapt to changing circumstances.  So, to us, it's no surprise that our brains, too, can respond to changing circumstances, can respond to environmental challenges by, say, building new neuronal synapses.  It would be more surprising if they couldn't.  And changes in the brain can involve non-cognitive as well as cognitive intelligence -- that is, they need not involve consciousness as they often do in humans and presumably other animals.

Brains and central nervous systems are, after all, centers of evaluation.  Sensory inputs go there, are sorted through and evaluated, and decisions are made on how to respond to them.  The idea, no pun intended, is that the brain is not a pre-programmed, hard-wired automaton, but allows each unique moment to be sifted and judged; even more, each moment can leave its mark.  Someone whose cognition is too rigid might be much more likely to be a former someone.

Brains have the texture of jello, but they're fluid as well -- food for thought at least.

Friday, April 20, 2012

Oh, Grandma, what big Scales you have!

One of the more interesting challenges in biology is to explain the genetic basis of important traits in a given species, and how the trait arose.  There are always evolutionary origins, we assume, but they can be difficult to work out.  A classic example is teeth: we depend on them for eating and survival.  Teeth are hard, repetitive structures, produced by repeated waves of use of the same genetic mechanisms, one wave per tooth.  But our earliest vertebrate ancestors didn't have teeth, or jaws for that matter.  What they did have at an early stage in vertebrate evolution was scales--hard repetitive structures covering their bodies.

A long-standing issue is where teeth came from developmentally, and whether teeth are a recruitment of scale-generating genetic mechanisms.  While the issues are a bit technical, one of the leaders in this area of work from a genetic point of view is Dr Kazuhiko Kawasaki, a senior research scientist in our own group here at Penn State.  We asked Kazz if we could post his up-to-date exploration of this issue, and he kindly agreed.  Here it is:

Contributed by Kazuhiko Kawasaki
The Developmental and Evolutionary Origins of the Tooth
Among the most important innovations in the evolution of vertebrates is the tooth, which enabled active feeding. It has long been thought that oral teeth arose from scales on the skin surface, based on the similarities in these two structures in the shark (1). However, an extinct jawless vertebrate, called Loganellia scotica, was found to have scales on the body and tooth-like structures (called denticles) in the oropharyngeal cavity (the back of the mouth). Analysis of these structures led to a new hypothesis: oral teeth originated by co-opting the developmental control used in the oropharyngeal denticles (2). The former theory, that teeth originated from external scales, is called the 'outside-in' hypothesis, whereas the latter, that oral teeth came from oropharyngeal teeth, is called the 'inside-out' hypothesis. The tooth and the denticle both develop between two primary tissue layers in the early embryo, epithelium and neural-crest-derived ectomesenchyme; hence a critical difference in these two hypotheses is the tissue origin of the epithelium, either ectoderm (scales) or endoderm (oropharyngeal denticles). In what tissue do these structures originate?

Teeth (left) and scales (right) of the nurse shark.

Until recently, the epithelium involved in oral tooth development was thought to be derived from ectoderm that is located near the border with endoderm. Soukup et al. updated this premise using a transgenic Mexican axolotl that produces green fluorescent protein (GFP) throughout its body (3). They transplanted GFP-producing ectoderm to a normal animal and injected a red dye into endoderm of the transplanted animal at an early developmental stage. The result was that some teeth were labeled in green, others in red, and still others in both green and red, showing that teeth are of ectodermal, endodermal, or a mixed ecto/endodermal origin. Based on this result, the authors suggested "a dominant role for the neural crest mesenchyme over epithelia in tooth initiation and, from an evolutionary point of view, that an essential factor in the evolution of teeth was the odontogenic capacity of neural crest cells, regardless of possible 'outside-in' or 'inside-out' influx of the epithelium".

While the developmental and molecular data demonstrate the essentially identical nature of ectodermal and endodermal teeth, the result does not answer the question about the evolutionary origin. Yet, this study appears to have triggered many subsequent studies, and the outside-in, inside-out, and other modified theories have been proposed as a result (4, 5). A recent analysis showed a gap in the phylogenetic distribution (that is, among different vertebrate lineages) of the internal denticles in Loganellia and oral teeth in jawed vertebrates. Thus, the morphological similarities in these two structures are likely the result of convergence, unrelated evolution of the same trait, refuting the evidence that supports the inside-out hypothesis (6). Further, tooth-like scales were discovered in the cheek and the lip of an Early Devonian jawed vertebrate, suggesting "the existence of a field of gene expression near the mouth margin in which scales could be transformed into teeth" (7). These scales and teeth share a similar structure, and probably used common genetic machinery for mineralization. It is also likely that this transformation was caused by a slight change in the gene regulatory network, responding to a signal from neural crest cells, if the tissue origin does not strictly determine the fate of epithelium.

Here are some references about these issues:


Thursday, April 19, 2012

Fat chance that obesity would turn out to be simple! But, then, what is it?

That obesity rates are high in poor neighborhoods because these areas are "food deserts", home not to grocery stores that provide fresh produce but to fast food restaurants and liquor stores, is an idea that has gained traction in recent years.  Is it true?  Gina Kolata of the New York Times reported yesterday on two new studies that say no. 

Kolata writes:
...two new studies have found something unexpected. Such neighborhoods not only have more fast food restaurants and convenience stores than more affluent ones, but more grocery stores, supermarkets and full-service restaurants, too. And there is no relationship between the type of food being sold in a neighborhood and obesity among its children and adolescents.
Within a couple of miles of almost any urban neighborhood, “you can get basically any type of food,” said Roland Sturm of the RAND Corporation, lead author of one of the studies. “Maybe we should call it a food swamp rather than a desert,” he said.
The solution to obesity, some have said, is to provide fresh produce.  Families will buy it and children will eat strawberries and lettuce rather than hamburgers and fries, and obesity rates will fall.  But Helen Lee, in her study published in Social Science & Medicine in April, finds that
...children who live in residentially poor and minority neighborhoods are indeed more likely to have greater access to fast-food outlets and convenience stores. However, these neighborhoods also have greater access to other food establishments that have not been linked to increased obesity risk, including large-scale grocery stores. When examined in a multi-level modeling framework, differential exposure to food outlets does not independently explain weight gain over time in this sample of elementary school-aged children. Variation in residential food outlet availability also does not explain socioeconomic and racial/ethnic differences. It may thus be important to reconsider whether food access is, in all settings, a salient factor in understanding obesity risk among young children.
She used data from the Early Childhood Longitudinal Study kindergarten cohort (ECLS-K), 1999-2000, for body mass index (BMI) and residence in a random sample of children followed from kindergarten through fifth grade.  She used longitudinal data on all businesses in the nation from 1992 to 2006 to determine what kinds of businesses are in which neighborhoods, and she categorized food availability by type of food store, from full supermarket to corner store.  From these data she concludes that "food outlet exposure holds no independent relationship to child weight gain."
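For readers wondering what a 'multi-level modeling framework' looks like in practice, here is a hedged sketch of the general idea: children are nested within neighborhoods, so neighborhood enters as a random effect while food-outlet exposure is a fixed effect.  The file and column names below are hypothetical, not taken from the ECLS-K data or from Lee's actual model specification.

```python
# Sketch of a multi-level (mixed) model: children clustered within neighborhoods,
# asking whether food-outlet exposure predicts weight gain once clustering and a
# covariate are accounted for. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("children_neighborhoods.csv")   # hypothetical dataset

model = smf.mixedlm(
    "bmi_change ~ fast_food_count + grocery_count + family_income",
    data=df,
    groups=df["neighborhood_id"],   # random effect: children share a neighborhood
)
result = model.fit()
print(result.summary())             # is the fast_food_count coefficient near zero?
```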

The second study, published in the American Journal of Preventive Medicine in February, reports no "robust relationship between food environment and consumption".  This study is based on dietary and BMI data from the 2005 and 2007 California Health Interview Survey (CHIS), and "food environment" data measured as "counts and density of businesses" categorized by type and distance from a respondent's home or school.  The authors found no correlation between food availability and BMI.

What do the findings of these two studies mean about childhood obesity?  And, do they prove that the idea of food deserts is an urban myth, unrelated to obesity rates?

To us, these studies suggest several things.  Both base their information about food availability on population-level data, and even though they have data on children's individual weights and heights, this tells us nothing whatsoever about where their parents shopped for food, what they bought, or what the children actually were in the habit of eating.  Assuming that it means anything at all about this is an example of the 'ecological fallacy', the imputation of population-level data to individuals.  Even more problematic is the assumption that because fresh produce is available, children are eating it.  Yes, the CHIS data included some information on what children reported eating, but dietary information collected in this way is notoriously unreliable.  Indeed, dietary information collected in any way is notoriously unreliable.

So, although these studies may indeed show, as they set out to do, that poor neighborhoods are not the 'food deserts' they've been assumed to be, this tells us little to nothing about the causes of obesity in poor neighborhoods.  Or indeed anywhere -- there are plenty of overweight and obese children in middle and upper class neighborhoods as well.

And, even if produce is available, it can be more expensive than processed foods, and often requires preparation time, and, anyway, must actually be consumed to have any health effects. Reducing the very real obesity epidemic to an issue of available fresh foods ignores the complexity of the cultural issues overlaying food, eating, how children spend their free time, whether active or sedentary, and so on.

The dream of simplicity redux
The obesity epidemic is an example of another complex trait that many have wished to reduce to simple causes, be they single genes or single environmental contributors.  So, while we readily criticize genetics and its many 'omic' children for their very expensive, low-payoff, often self-serving nature, it's only appropriate to extend the same critique to epidemiology, which is perhaps even more costly and which often--especially for the most important problems--delivers even less cogent payoff.  The epidemiology empire is every bit as large and self-serving as anything in genomics.

But then, with all of its resources, why are questions such as the origin and nature of obesity, and even to some extent its actual relationship to health outcomes, proving to be so challenging?   How can we not, after decades of 'practice' at large-scale megavariate studies, know the answers to even some of the most fundamental questions?  A large biostatistics industry has grown up around this work, with spillover into genomics.

Is chronic disease epidemiology yet another case of technology driving rather poorly framed questions?  Is it another clear indicator that our reductionistic, enumeration-driven approach to science is somehow an inappropriate epistemology--that there are better ways to ask the question, or better ways to understand the probabilistic, complex world, than what we have developed so far?

Wednesday, April 18, 2012

Bulking up....or not: what chaos can tell us about order

A paper in last week's Science ('Using Gene Expression Noise to Understand Gene Regulation', Munsky et al.), part of the special issue on computational biology, asks whether what looks like noisy gene expression can be informative about gene regulation.  Gene expression within a single cell is a topic of growing interest.  It has seemed to be fairly random, but Munsky et al. suggest that if you look closely enough, the randomness can be indicative of some quite regular processes.  Mechanical engineer Brian Munsky and colleagues have used a similar approach to identify gene regulatory networks.

If molecules interact in a probabilistic way--bouncing randomly around the cell until they perchance (literally) bump into each other, and if each cell has countless zillions of molecules buzzing around in this way all the time, and if it's clear that the cells even in the same tissue in the same person (and hence the same genotype) are each a bit different....then how come we're so highly organized into tissues and organs that mostly work mostly correctly.....rather than being just a jiggling blob of formless jelly?

Sad to say, but Prairie Home Companion can't be true!
One obvious possibility is what one could call the law of large numbers, or a principle of central tendency.  All of these random motions have an average behavior, just like everybody's height or glucose levels vary but most of us are somewhere near the middle.  Unlike Minnesotans in Garrison Keillor's Prairie Home Companion, not all the children are above average!

With large numbers of cells in a given organ, there will be variation, but only a small percentage of cells will behave very differently from the average, and even if they are very naughty indeed, their effect on the organ as a whole--and, say, on the person's health--is trivial: the vast majority of well-behaving cells cover for the wayward ones.  And, indeed, we have bodily systems to detect cells that misbehave too far.  When those systems fail we can get nasty conditions such as cancers.
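A toy simulation makes the 'safety in numbers' point concrete (the numbers below are invented purely for illustration): even when individual cells are very noisy, the organ-level average is extremely steady, with the spread shrinking roughly as one over the square root of the number of cells.

```python
# Toy illustration of the law-of-large-numbers point: noisy cells, steady organs.
# All values are made up for illustration only.
import numpy as np

rng = np.random.default_rng(42)
per_cell_mean, per_cell_sd = 100.0, 50.0   # hypothetical, very noisy single cells

for n_cells in (10, 1_000, 100_000):
    # Simulate 500 'organs', each the average output of n_cells noisy cells
    organ_means = [rng.normal(per_cell_mean, per_cell_sd, size=n_cells).mean()
                   for _ in range(500)]
    spread = np.std(organ_means) / np.mean(organ_means) * 100
    print(f"{n_cells:>7,} cells per organ -> organ-to-organ spread ~ {spread:.3f}% of the mean")
# The spread shrinks roughly as 1/sqrt(number of cells): wayward cells wash out.
```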

So how is this stochastic (probabilistic) buzz-fest made manifest at the level of individual genes and their levels of expression (use) by cells?   As the authors of this paper note, because of the vagaries of Brownian motion, two cells, even those produced by the same progenitor cell, will never be identical at the molecular level.  Thus, things like the number of transcription factor molecules per cell, needed to cause specific other genes to be expressed, are unlikely to be identical, and this cell-to-cell variability will affect gene expression levels among cells, and ultimately can lead to phenotypic variability as well.
Consider a single mother cell dividing into two daughter cells of equal volume. During the division process, all the molecules in the mother cell are in Brownian motion according to the laws of statistical mechanics. The probability that each daughter cell inherits the same number of molecules is infinitesimally small. Even in the event that the two daughter cells receive exactly one copy of a particular transcription factor, each transcription factor will perform a Brownian random walk through its cellular volume before finding its target promoter and activating gene expression. Because Brownian motion is uncorrelated in the two daughter cells, it is statistically impossible for both genes to become activated at the exact same time, further amplifying the phenotypic difference between the two daughter cells.
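A tiny simulation (ours, not the paper's) shows how quickly an exactly even split becomes improbable as molecule numbers grow, which is the partitioning argument in the passage just quoted:

```python
# Sketch of the partitioning argument: if each of n molecules in the mother cell
# independently ends up in one daughter or the other (probability 1/2 each),
# how often is the split exactly even? Our illustration, not the paper's math.
import numpy as np

rng = np.random.default_rng(7)
for n_molecules in (10, 100, 10_000):
    daughter_a = rng.binomial(n_molecules, 0.5, size=100_000)
    exactly_even = np.mean(daughter_a == n_molecules // 2)
    print(f"n = {n_molecules:>6}: P(exactly even split) ~ {exactly_even:.3f}")
# The probability falls roughly as 1/sqrt(n); for realistic molecule counts an
# exactly equal inheritance is vanishingly unlikely, as the authors note.
```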
Munsky et al. use cell-to-cell variability in gene expression as a way to understand gene regulation, and they present quantitative models for this.  Genes can be expressed all the time, or their expression can be episodic or timed -- this is 'constitutive' vs 'regulated' expression.  Looking at the number of transcript copies of genes of interest across cells, they note that when transcript births and deaths are independent events occurring at constant average rates, the copy numbers follow a Poisson distribution (whose variance equals its mean); this indicates constitutive expression.  Deviation from the Poisson distribution--too many cells with very few or very many copies, for example--suggests regulated or episodic expression; expression of a gene within a cell can be more or less tightly regulated over time, and can switch states.
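A bare-bones sketch of that diagnostic idea (the authors fit quantitative mechanistic models; this is only the intuition): compare the variance-to-mean ratio, or Fano factor, of transcript counts across cells.  Poisson-like counts give a Fano factor near 1, consistent with constitutive expression, while over-dispersed counts point to regulated, 'bursty' expression.  The negative binomial below is just a convenient stand-in for a bursty gene, not a model from the paper.

```python
# Compare transcript-count distributions across simulated single cells via the
# Fano factor (variance / mean). Poisson-like (Fano ~ 1) suggests constitutive
# expression; over-dispersed (Fano >> 1) suggests bursty, regulated expression.
import numpy as np

rng = np.random.default_rng(3)
n_cells, mean_count = 5_000, 20

constitutive = rng.poisson(mean_count, size=n_cells)
# Negative binomial with the same mean but a fatter tail, standing in for bursts
bursty = rng.negative_binomial(n=2, p=2 / (2 + mean_count), size=n_cells)

for name, counts in [("constitutive", constitutive), ("bursty", bursty)]:
    fano = counts.var() / counts.mean()
    print(f"{name:>12}: mean = {counts.mean():5.1f}, Fano factor = {fano:4.1f}")
```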

Documenting and making sense of transcript levels in single cells, the authors write, can be informative about gene regulation, and about gene networks, in ways that looking at gene expression averaged over many cells or whole tissues can't be.  This is because average statistics, such as the mean number of transcripts of a particular gene across cells, mask the cell-to-cell distribution, and so regulatory mechanisms can't be inferred.  Yet most cellular studies are of test-tubes full of the 'same' kinds of cells, analyzed in aggregate, masking this informative, underlying variation.

Hounds and Hares
If, as the authors predict, sequencing of all the transcribed genes in a single cell becomes routine, understanding gene networks will be easier.  And it may help account for our orderliness as organisms.  This can be seen at higher levels, in ordinary experience, too, as this example may help make clear:

Hounds chase hares, but each hound and each hare live individually different lives.  If we want to understand the overall organization of the hound-hare part of the ecosystem, such as how their respective populations vary over space and time, we can look at the aggregate behavior: the chance a hound will sniff a hare, the chance it will catch the hare and so on.  But if we want to understand details, we might have to follow a number of individual hounds and hares, because not all will be equally successful in their hunt, and the circumstances of the hunts will vary.

Bulking up....or battening down?
These ideas apply to situations when there are many 'identical' cells, as in a given organ.  The central tendency would seem to provide safety in numbers.  But if that's the case, how big do the numbers have to be to protect the organism from its fraction of unusually behaving cells?  This will depend on many things, including the 'variance' (variation from cell to cell) of the process, how many cells are in sync at each time, and so on.

There's another important issue.  In complex tissues, gene expression is changed via various processes that we can call 'signaling'.  Cells sense their surroundings and respond to them.  They send out signal molecules appropriate for their location.  This reinforces cells in a given tissue to do what's appropriate for the tissue.  And, importantly, signaling can be homeostatic:  some signals are called 'activators' because cells detecting the signal's presence activate that same gene, or some set of response genes, as a result.  Other signals are 'inhibitors' and induce cells to do the opposite.  Waves of expression can generate waves of tissues (like hairs or scales) in an embryo, but activation and inhibition interactions can also generate stability, as signal levels can lead cells that drift too far out of line, so to speak, to fall back into line--to batten down and stay within acceptable limits.  Could this be relevant for the question at hand?

In organs with lots of cells, say in the millions, perhaps that is bulked up enough to bar the door against cellular chaos.  But what about smaller organisms, of which there are many, in whom, like small fleas  biting the backs of larger fleas, the organs can be very small indeed?  Is there any evidence that their organ stability is less, or is more vulnerable to vagrant cells?  Does natural selection work differently in such organisms (e.g., they have to reproduce more or faster, to stay viable as a population)?  We haven't thought about this directly, though we did refer to the issue of embryonic selection as contrasted with competitive Darwinian selection in our book.  Perhaps there's no issue here, or perhaps there is something interesting to follow up.


If we're understanding this paper correctly, it's an interesting application of physics to biology.  This might seem to resemble the ideal gas law in chemistry and physics: there, for a given container and number of molecules, the pressure or temperature of any gas follows the same law, and this does not require tracking any individual molecules.  In a way, central tendencies of similar cells are like that.  But since each cell is different, each gene is different, and gene expression can affect other gene expression, cells are not like containers of oxygen or hydrogen.  Still, when large numbers of molecules are involved, the distribution of traits among similar cells does seem to follow orderly statistical properties.

Tuesday, April 17, 2012

A bit of a storm

Our post yesterday asking whether support for whole genome sequencing was fading seems to have triggered a bit of a storm.  We know this because our hit count for the day was astronomical (well, for us).  We noticed that a bunch of tweets were sending readers our way, so, naturally enough we thought we'd check out what people were saying about the post on Twitter.  And it was interesting.

Most people, though not all, who made an editorial comment disagreed with us.  And, ok, it's hard to go into detail in 140 characters, but the comments were pretty uninspired, shall we say (along the lines of "Is whole genome sequencing fading? The answer is No!").  Even so, to us they were an indication that we'd hit a nerve.  As far as we can tell, the argument is that because sequencing is still being done, it should continue to be.

This looks to us like some serious circling of the wagons: people with a vested interest in the status quo protecting their interests.  Ok, fair enough, and understandable.  But this does the science a disservice.  There are serious issues here -- tweeting about how sequencing has to happen because it's happening just doesn't do them justice.

As Ken posted yesterday, writing about why whole genome sequencing hasn't met the promises made about it:
There are too many variants to sort through, the individual signal is too weak, and too many parts of the genome contribute to many if not most traits, for genomes to be all that important--whether for predicting future disease, normal phenotypes like behaviors, or fitness in the face of natural selection.
As he also wrote, there are some traits for which one or a few genes are important, and working those out is where the genetics money should be spent.  Doing whole genome sequencing because we'll surely learn something even if we don't yet know what, because personalized medicine is just over the horizon, or just because we can -- none of these is a good reason to keep spending the kinds of money on this that we're spending.  We know enough now to know that genomic contributions to most traits are multiple, varied and complex.

This is not an admission of defeat.  It is an acknowledgement that we've learned a lot of genetics in the last century, reinforced clearly by the new sequencing technology; and what we've learned is that most traits are multifactorial, due to gene-by-gene and/or gene-by-environment interactions, with most often many pathways to the same phenotype, and so on.  We should give up the conceit that we're going to be able to predict and prevent diseases across the board based on genomes, and get on with solving problems.  Those that are genetic need genetic approaches.  But there are other issues, and other ways, to learn about evolution, disease, and the basic nature of life.

Monday, April 16, 2012

Is whole genome sequencing fading? Will it rebound (or relapse)?

There are various informal indicators that funders are losing enthusiasm for human whole genome sequencing.  We've seen discussions of 'genome fatigue' in the media (Carl Zimmer, e.g., talks about this here), and one colleague said there wasn't much enthusiasm for whole genome sequencing because we hadn't found the cure for cancer or made highly useful personalized predictive medicine.  Another colleague on an NIH grant review panel said that particular panel, at least, wasn't going to fund any more GWAS studies.
DNA sequence data, Wikimedia Commons

If this turns out to be more than a few anecdotes or personal opinions, and is actually occurring, it's understandable and to be lauded.  As we think we can truthfully claim, we have for years been warning of the dangers of the kind of overkill that genomics (and, indeed, other 'omics' fads) presents:  promise miracles and you had better deliver!

The same thing applies to evolutionary studies that seek whole genome sequences as well as to studies designed to use such data to predict individual diseases.  There are too many variants to sort through, the individual signal is too weak, and too many parts of the genome contribute to many if not most traits, for genomes to be all that important--whether for predicting future disease, normal phenotypes like behaviors, or fitness in the face of natural selection.

There are some traits, especially those tied closely to a specific protein, in which only one or a few genes are important.  There are many genes which, if broken by mutation, can cause serious problems.  And as we've said numerous times, this is where the genetics money should be spent.  But the nature of evolution is that it has produced complexity by involving numerous cooperating genetic elements, and traits are typically buffered against mutations.  Otherwise, organisms couldn't have gotten so complex (try making a brain or liver with just one gene!), and with so many genes and ever-present mutation, nobody in any species would ever survive.

The instances of single-gene or major-mutation causation are numerous and real.  They are already handled by services like genetic counseling in biomedicine, and by evolutionary or experimental analysis.  But the important nature of Nature is its complexity, and at present whole genome sequence data provide too much variation for us to deal with on adequate terms.

Nature screens the success of organisms on their overall traits, regardless of which genotypes contributed to them.  Many of the variants contributing to a given trait are new mutations or are very rare in the population, and very difficult to detect in terms of assigning 'risk' to them.  Worse, they flow through the population all the time, as individuals die and new ones are born.  Since their individual effects depend on their context--the ever-changing environment and the rest of the genome--these effects are also fluid.  Thus, enumerating causal variants may not be a very useful way to understand biological causation.

Of course, rumors of the demise of ever-higher throughput genomics may be greatly exaggerated.  Funding may not actually be diminishing, or may return.  Whether that will be a rebound towards good science, or a relapse of low payoff, is a matter of opinion.

Friday, April 13, 2012

New genes, old function

Ken mused a while back here on MT about the improbability of finding a DNA sequence that had no similarity to sequence from any known organism.  And this is as we'd expect, if all life on Earth shares a common ancestor, and nothing that has been discovered since Darwin first proposed this in 1859 suggests otherwise.

So, why are researchers reporting enzyme-coding DNA sequences that don't appear to be homologous to any known sequences for similar enzymes?  François Delavat et al. have sequenced genes from organisms found in an acid mine drainage, organisms that are resistant to being cultured (i.e., that can't readily be grown up in the lab), looking for novel genes or functions.  They report their findings in the open-access Nature journal, Scientific Reports ("Amylases without known homologues discovered in an acid mine drainage: significance and impact").

Amylases are enzymes that catalyze the breakdown of carbohydrates, or, in the case of the bacteria described here, degrade polymers found in their acidic, metal-heavy surroundings.  Amylases from bacteria that grow in culture have been well-studied, and have been classified into several families based on their structure and other characteristics.  Some have been found in extreme environments, but no one had reported the sequencing of DNA from Acid Mine Drainages -- very low pH, very high metal environments -- before this paper.

The authors
...decided to perform a function-based screening for the well-known amylases, using standard techniques. This strategy allowed the isolation of 28 positive clones, 2 of them being subcloned, the proteins purified and characterized in vitro. In silico analyses based on the nucleotidic sequence and both the primary and the predicted tertiary structures revealed that they are completely different from other known hydrolases as both genes encode a « protein of unknown function » and display no known conserved amylolytic domain. Nevertheless, in vitro tests confirmed the amylolytic activity of these 2 enzymes.
That is, the proteins these genes encode did degrade polysaccharide, but neither of the subcloned sequences matched any known amylase sequence in the databases.

As Delavat et al. point out, much is known about lab-friendly bacteria, and a whole lot less about organisms that can't be grown in the lab.  Thus, if these results are confirmed, the fact that these genes, from organisms found in an extreme, previously unexplored environment, don't look like other known amylase sequences doesn't at all suggest that these bacteria are unique, or that they don't share the same common origin the rest of us share.  Rather, it suggests that the lab-centric biology of the last century has given us a lab-centric view of the world.  It's no surprise that bacteria that live in the low pH, high metal extremes of Acid Mine Drainages would have evolved particular enzymes appropriate for that environment.  But it's also not a surprise that these enzymes have a function that is common to bacteria in every environment.

That the genes that code for these enzymes are unlike any of the subset of amylases yet described is another example of phenogenetic drift, the conservation of a biological trait or function even when its underlying genetic basis has changed.  These genes may look novel now, but as more bacteria are characterized from non-lab environments it's likely that more will be found that share some of the characteristics of these amylase genes. 

Note, also, that these sequences are genes -- they have protein coding structures and are identifiable from sequences as such.  They are not 'random' sequences with no known relation to the usual characteristics of genes.

Thursday, April 12, 2012

Be the person you were! Dope up....or be a dope!

The latest news (we heard it on the BBC World Service on Wednesday the 11th of April, but here's the story on the Daily Mail site) is about the desperate need of those working Wall Street to be more 'competitive' (with other sharks) in keeping their hands deeply in your pockets.  These Street walkers are worried they may be losing that 'special' trait that lures customers to their dens of (in)equity.

Now, it seems, a noble new service has been started by a clever entrepreneur who managed to graduate from some medical school:  he is luring aging bilkers to spend fortunes to 'adjust' their hormone balances, mainly by being goosed up with testosterone.  That'll help them keep the competitive edge!

The male hormone testosterone has become an unlikely drug of choice for Wall Street traders seeking to give themselves an edge over their professional rivals.
New York clinics have reported a rise in treatment for 'testosterone deficiency', sometimes known as 'andropause'.
They say many workers in the male-dominated industry are hoping that boosters of the hormone will help them perform better at work and put in longer hours.

Like ads in airline magazines: "Do you have LowT?  Feel like you did when you were 25!"  The company, modestly housed high in Trump Tower, charges $4500--and that's just for the consultation!  Still, one socially responsible person, a hormone-adjustment  'customer' of this 'medical' service, said the cost was steep but he was glad that he, unlike some of his competitors, could afford it.  Such a good soul, he!

A colleague of ours once said the most intelligent people were the most gullible (we forget his reasoning, and we won't say whether we believed him or not, because it would reveal our own intelligence).  Here, however, is an analog or homologue of that generalization:  the greediest people are the most gullible to greed.  Or something like that.  Women brokers, or perhaps broken women on the Street, apparently benefit from a good dose of testosterone just as the broken men do, though they're given less than the men, to prevent things like the growth of facial hair.  Not very 'Brazilian', but what the hell if a buck is to be made.

This story has little to do with the topics of MT, but seemed to be a good story about the presumptive intrusiveness of science into everyday life these days, trying to design, re-design, or in this case retro-design us to have whatever trait we think will give us an edge in life.   At least, unlike the early 20th century wishes of a similar sort, monkeys won't have to be sacrificed to get extracts from their 'glands' to boost the combativeness we need from our hedge-fund managers. (See this on the promoter of that treatment, Serge Voronoff.)

Figure from site about Voronoff.
But later on, when these hairy, deep-voiced, muscle-bound hyper-sexed 90 year-olds start getting diseases, will geneticists remember to ask if they've been Trumped up in the past, in case it's a risk factor?

In the absence of that kind of regulation (or retribution) for their attitude, there really doesn't seem to be anything the snakes on Wall Street won't stoop, or kneel, to do.

Well, at least we thought this might make an entertaining break from our attempts on this blog to discuss real science rather than what amounts to 21st century 'monkey gland' hawkery.

Wednesday, April 11, 2012

The next challenge in malaria control - artemisinin-resistant parasites

Anopheles mosquito, Wikimedia Commons
Sometimes the news about malaria is good, as recently when deaths from malaria were reported to be decreasing, even if inexplicably, and sometimes it's not so good.  Last week saw two not-so-good stories -- one in The Lancet and one in Science -- about increasing resistance to anti-malarial drugs in the Plasmodium falciparum parasite.  The Lancet paper documents this on the border between Thailand and Burma, and the Science paper reports the identification of the genome region in the parasite that is responsible for this newly developing resistance.  Because the parasites are becoming resistant to the best anti-malarial in use today, artemisinin, this is a serious issue.

The Science paper sets the stage:
Artemisinin-based combination therapies (ACTs) are the first-line treatment in nearly all malaria-endemic countries and are central to the current success of global efforts to control and eliminate Plasmodium falciparum malaria. Resistance to artemisinin (ART) in P. falciparum has been confirmed in Southeast Asia, raising concerns that it will spread to sub-Saharan Africa, following the path of chloroquine and anti-folate resistance. ART resistance results in reduced parasite clearance rates (CRs) after treatment...
As the BBC piece about this story says, "In 2009 researchers found that the most deadly species of malaria parasites, spread by mosquitoes, were becoming more resistant to these drugs in parts of western Cambodia."  This will make it much harder to control the disease in this area, never mind eradicate it.

Most malaria deaths occur in sub-Saharan Africa, and the spread of resistance to this part of the world would have disastrous public health consequences.  There is no therapy waiting in the wings to replace ACTs.  Whether the newly identified resistance is because infected mosquitoes have moved the 500 miles from the initial sites where resistance was found toward the border or because the parasites spontaneously developed resistance on their own is not known.  If the latter, this suggests that resistance is likely to arise de novo anywhere that artemisinin is in use -- and that's everywhere malaria is found, as ACTs are the most effective treatment currently in use.

This is, of course, evolution in action: artificial selection in favor of resistant parasites.  It's artificial because we're controlling 'nature' and how it screens.  Normally, selection that's too strong for the reproductive power of the selected species can mean doom -- extinction.  Blasting the species with a lethal selective factor can do that, and in this case we'd like to extinctify the parasite.  But driving a rapidly reproducing species extinct by selection is difficult, because if any resistance mutations exist, the organisms bearing them have a relative smorgasbord of food -- hosts not hosting other parasite individuals -- and this can give them an enormous selective advantage.  So artificial selection against susceptibility is also, and equally, strong selection for resistance.
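A minimal one-locus selection sketch (ours, not from either paper; the fitness numbers are invented) of how fast a rare resistance variant can take over once drug treatment gives it a large advantage:

    # haploid, discrete generations: p is the frequency of the resistance allele
    p = 1e-6                   # starts as a single rare mutation
    w_res, w_sus = 1.0, 0.2    # hypothetical fitnesses of resistant vs susceptible under treatment
    for generation in range(1, 26):
        p = p * w_res / (p * w_res + (1 - p) * w_sus)
        if generation % 5 == 0:
            print(generation, round(p, 6))   # rare for a while, then sweeps within a few generations

With this five-fold advantage the variant is still essentially invisible after 5 generations, common after 10, and nearly fixed after 15 -- one reason resistance can seem to appear 'suddenly' wherever a drug is in heavy use.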

Unfortunately the development of resistance is inevitable when a strong selective force, such as a drug against an infectious agent, is in widespread use against a prolific target.  And it shows why it is short-sighted to claim that Rachel Carson was personally responsible for millions of deaths from malaria because she pointed out, in her 1962 book Silent Spring, the harmful environmental effects of DDT, an insecticide that effectively kills non-resistant mosquitoes.  If its use against mosquitoes had been widespread and sustained, it would long ago have lost its efficacy.

The inevitable rise of resistance to treatment is why prevention or, even better, eradication are the preferred approaches.  Unfortunately developing a vaccine against malaria is proving to be a scientific challenge, and similar evolutionary considerations will apply; and eradication, while doable in theory, is a political and economic challenge, and could involve the same resistance phenomenon if not done right.  So the documented rise of drug-resistant P. falciparum on the Thai-Burma border is a severe blow.

We don't happen to know what, if any, intermediate strategies are being considered or tried.  Multiple moderate attacks, with different pesticides or against various aspects of the ecology or life-cycle, might not wipe individuals out so quickly, but may 'confuse' them so that no resistance mechanism can arise, because those bearing a new mutation protecting against agent X would be vulnerable to agent Y.  A complex ecology of modest selective factors could possibly reduce the parasite population to a point where it really did become lethally vulnerable to some wholesale assault.

Or would it be necessary to accept some low, but not zero, rate of infection to prevent major resistance?  Smallpox and polio would seem to suggest that real eradication is possible, but how typical that can be expected to be is unknown (to us).

Tuesday, April 10, 2012

Changing the diagnosis? Nature does it, too!

Here is a story that discusses the changing diagnosis of autism, a hot topic this week in the science news.  The doctor interviewed, Dr Bryan King, has spent the last 5 years working with a committee charged with revising the diagnosis of autism for DSM-5, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, the diagnostic standard for psychiatric illnesses.  Reports of a dramatic increase in the prevalence of autism, along with genetic findings revealing autism's complexity (which we've posted about), are in the news.  So Dr King, involved in setting the standard diagnostic criteria for autism or autism spectrum disorder (ASD), is interviewed about the process.

Obviously, neither environmental nor genetic factors cause 'autism' per se, if the very meaning of the term changes. ASD is in some ways a cultural trait, since it's we who define it.  If we change our definition, the risks associated with specific genes or environments necessarily change as well--yet, in physical terms, they have clearly not changed at all!  If a broadened definition roughly doubles the number of people who count as cases, then a genetic variant that conferred a risk of, say, 0.5% of autism 10 years ago would today, on average, confer nearly 1.0% (twice as much as before), with no change in the variant or its biology.
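The arithmetic behind that example, as we understand it, is simply that the variant's effect stays put while the boundary of who counts as a 'case' moves; a toy calculation (ours, with invented numbers):

    # the variant's biological effect is unchanged; only the case definition has widened
    carrier_risk_old = 0.005      # 0.5% of carriers met the old, narrower criteria
    broadening_factor = 2.0       # suppose the new criteria roughly double the diagnosed fraction
    carrier_risk_new = carrier_risk_old * broadening_factor
    print(carrier_risk_new)       # 0.01, i.e. ~1.0%, with no biological change at all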

This is one problem with doing the genetics of 'autism' when the trait you're doing the genetics of is a moveable target.  The politics and other aspects of the diagnostic criteria may or may not be proper, but certainly the behavioral cutoff is cultural, both in the sense of how the trait manifests in a given cultural setting and in the way that setting sets diagnostic criteria.

Relevant to MT is that Nature probably works the same way.  Here the key issue is natural selection.  Natural selection is a screen of organisms for traits that are more, or less, compatible with local circumstances.  But those circumstances change, sometimes rapidly.  Thus, like cultural definitions, the criteria that determine the relative fitness--reproductive success--are changing.  This means that here, too, the fitness of particular genetic variants is context-dependent, not fixed or absolute.

This is one of the challenging aspects of evolutionary biology, because it is tempting to view a genotype as inherently good or bad, inherently likely to succeed or not.  That makes theory and modeling of natural selection, evolution, and species formation tractable.

But Nature may not be like that.  If fitness is a shifting phenomenon, which it certainly is to at least some extent, then everything is context-dependent, and relative to circumstances, all the time.  So many of the scenarios proposed to account for what we see today may carry a degree of the same arbitrariness we see in the definition of a given trait, like autism.

Monday, April 9, 2012

SuperSize me! Nothing in America can be small (except genetic risks?)

In evolutionary biology, perhaps especially human evolution and anthropology, and in biomedical genetics, the current working mythology....er, we mean 'model'....is of strong, rapid, definitive natural selection as 'the' mechanism by which the traits we see today got here.  Since adaptation only works through what is inherited (environmental effects, so to speak, die with the individual), the same kind of simple-cause deterministic thinking has been applied to the genetic control of current traits.

There are all sorts of reasons to expect, or hope, that cause and effect will be simple.  Single-gene causation of adaptation means we can find 'the' gene that explains why you vote or mate as you do, have a particular  disease or physical trait, and so on.  Pharma doesn't want to invest in profit-less rare traits, or complex traits for which a single med will only help a small fraction of patients.  And, of course, simplicity lends itself to melodrama and hence to the visual and even the print news.

But what we see are a multiplicity of individually small effects, as last week's papers on autism (the subject of our post on Friday) show yet again.  This is disappointing, but why is nature that way?  There are several reasons to believe that the apparent complexity is, in fact, the truth.

This should surprise no one.  For example, mutations conferring simple strong effects on disease susceptibility will be quickly eliminated by natural selection.  Genes that are fundamental to many other genes because of interactions may be especially vulnerable to such mutations--so we may not find many risk alleles in those genes.

If many genes contribute to a trait, their individual effects almost necessarily will be even smaller.  This clearly is the case for the kinds of traits that are the main targets of GWAS and similar approaches.

Most variants that confer high risk would be eliminated by selection unless, as some argue, recent environments make them harmful (e.g., causing diabetes or cancer) when they weren't harmful before.  If their effects were slight or of late onset, they would not impair reproductive success, and would stay around in the population.  But the first scenario doesn't seem to be what has happened: in most GWAS'ed traits, risk has risen rapidly and greatly during the past century, yet the evidence is not that a few genes with major responses to these environmental changes are responsible for the disease--indeed, the GWAS problem is precisely that this is not what we find!

Note also that traits not present at birth, meaning most GWAS'ed traits, take decades to manifest themselves.  The risk difference between variants at the 'risk' genes is usually very small, meaning that they change the risk at any given age by trivial amounts.  We may not want to get such diseases, but from a biological point of view these are really minuscule effects.  This also easily and unsurprisingly accounts for the recent finding of low concordance of age and cause of death in identical twins, despite their shared genotypes.

The very same arguments apply to the ability of natural selection to detect these differences, which in turn explains why it is so difficult to find 'signatures' of natural selection in genomic data, and why most selective arguments that refer to specific genes are without strong support beyond the neat stories one tells about them (as we see in the news almost daily, and report here on MT).

When a gene has a true, but tiny, effect on risk (or on evolutionary fitness), there are so many competing causes of death or disease, or bad luck, that the odds of that gene's effect actually being manifest (as disease, or fitness) are simply very, very small.
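A crude simulation sketch of that point (ours; it assumes numpy and invented numbers): give carriers of a variant a modestly higher chance of one particular disease, and ask how often the variant actually changes anyone's outcome.

    import numpy as np
    rng = np.random.default_rng(3)

    n = 1_000_000
    u = rng.random(n)                          # one 'luck' draw per carrier for this disease
    base_risk, carrier_risk = 0.020, 0.025     # a 25% relative increase, which sounds substantial
    affected_anyway = u < base_risk            # would have been affected regardless of genotype
    tipped_by_variant = (u >= base_risk) & (u < carrier_risk)
    print(affected_anyway.mean(), tipped_by_variant.mean())   # ~0.02 vs ~0.005

So even a 25% relative risk changes the actual outcome for only about half a percent of carriers -- and that overstates it, since many of them would meet some competing cause of disease or death before this one ever showed up.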

These are not complicated ideas to understand!  They are not our own private theory.  They're plainly visible in the mountain of facts we already have available to us (without huge, costly biobanks and promises of personalized medicine or strong adaptive arguments).

The consequences of traits like disease or adaptation may be major--nobody wants cancer--but in trying to find 'the' gene or few genes that are responsible, we're making mountains out of biological molehills.