Tuesday, May 26, 2015

Medical research ethics

In today's NYTimes, there is an OpEd column by bioethicist Carl Elliott about biomedical ethics (or its lack) at the University of Minnesota.  It outlines many instances of what sound like very serious ethical violations and a lack of ethics-approval (Institutional Review Board, or IRB) scrutiny of research.  IRBs don't oversee the actual research, they just review proposals.  So their job is to identify unethical aspects, such as lack of adequate informed consent, unnecessary pain or stress to animals, poor control of confidential information, and so on, so that the proposal can be adjusted before it goes forward.

As Elliott writes, the current iteration of IRBs, in which each institution sets up its own review system to approve or disapprove any research proposal that some faculty or staff member wishes to pursue, was established in the 1970's. The problem, he writes, is that this is basically just a self-monitored, institution-specific honor system, and honor systems are voluntary, subjective, and can be subverted.  More ongoing monitoring, with teeth, would be needed if abuses are to be spotted and prevented.  The commentary names many cases in the psychiatry department at Minnesota alone that sound rather horrific.

But there are generalizable problems.  Over the years we have seen all sorts of projects approved, especially those involving animals (usually lab mice).  We're not in our medical school, which is on a distant campus, so we can't say anything about human subjects research there or generally, beyond that occasionally one gets the impression that approval is pretty lax.  We were once told by a high-placed university administrator at a major medical campus (not ours), an IRB committee member there, that s/he regularly tried to persuade the IRB to approve things they were hesitant about...because the university wanted the overhead funds from the grant, which it would not get if the project were not approved.

There are community members on these boards, not just the institution's insiders, but how often, or how effectively, they stop questionable projects (given that they are not specialists, and for the other usual social-pressure reasons) is something we cannot comment on--but it should be studied carefully (perhaps it has been).

What's right to do to them?  From Wikimedia Commons

The things that are permitted to be done to animals are often of a kind that animal-rights advocates have every reason to object to.  Not only is much done that causes serious distress (e.g., genetically engineering animals to develop abnormally or get disease, surgeries of all sorts, or intrusively monitoring function in live animals), but much is done that is essentially trivial relative to the life and death of a sentient organism.  Should we personally have been allowed to study countless embryos to see how genes were used in patterning their teeth and tooth-cusps?  Our work was to understand the basic genetic processes that led to the complex, nested patterning of many traits, of which teeth were an accessible example.  Should students be allowed to practice procedures such as euthanizing mice who otherwise would not be killed?

The issues are daunting, because at present many things we would want to know (generally for selfish, human-oriented reasons) can't really be studied except in lab animals. Human subjects may be irrelevant if the work is not about disease, and even for disease-related problems cell culture is, so far, only a partial substitute.  So how do you draw the line? Don't we have good reason to want to 'practice' on animals before, say, costly and rare transgenic animals are used for some procedure that takes skill and experience (even if just to minimize the animal's distress)?  Faculty careers depend on research productivity, which, to be frank, means universities' interest in getting grants with their overhead, as well as the consequent publication productivity their offices can spin.  Given that, how much or how often is research on humans or animals done in ways that, really, are almost wholly about our careers, not theirs?

We raise animals, often under miserable conditions, to slaughter and eat them.  Lab animals often have protected, safe conditions until we decide to end their lives, and then we do that mostly without pain or terror to them.  They would have no life at all, no awareness or experience, without our breeding them.  Where is the line to be drawn?

Similar issues apply to human subjects, even those involved in social or psychological surveys that really involve no risk except, perhaps, a breach of confidentiality about sensitive personal issues. And medical procedures really do need to be tested to see if they work, and working on animals can only take this so far. We may have to 'experiment' on humans in disease-related settings by exploring things we really can't promise will work, or that won't leave the test subjects worse off.

More disturbing to us is that the idea that subjects are really 'informed' when they sign informed consent is inevitably far off the mark.  Subjects may be desperate, dependent on the investigator, or volunteering because they are good-willed and socially responsible, but they rarely understand the fine print of what they are consenting to, no matter how educated they are or how sincere the investigators are. More profoundly, if the investigators actually knew all the benefits and risks, they wouldn't need to do the research.  So even they themselves aren't fully 'informed'.  That's not the same as serious or draconian malpractice, and the situation is far from clear-cut, which is in a sense why some sort of review board is needed.  But how do we make sure that it works effectively, if honor is not sufficient?

What are this chimp's proper civil 'rights'?  From the linked BBC story.

Then there are questions about the more human-like animals.  Chimps have received some protections.  They are so human-like that they have been preferred or even required model systems for human problems.  We personally don't know about restrictions that may apply to other great apes. But monkeys are now being brought into the where-are-the-limits question.  A good journalistic treatment of the issue of animal 'human' rights is on today's BBC website. In some ways, this seems silly, but in many ways it is absolutely something serious to think about.  And what about cloning Neanderthals (or even mammoths)?  Where is the ethical line to be drawn?

These are serious moral issues, but morals have a tendency to be rationalized, and cruelty to be euphemized.  When and where are we being too loose, and how can we decide what is right, or at least acceptable, to do as we work through our careers, hoping to leave the world, or at least humankind, better off as a result?

Monday, May 25, 2015

QC and the limits of detection

It’s been a while since I’ve blogged about anything – things have been quite busy around SMRU and I haven't had many chances to sit down and write for fun [i] (some press about our current work here, here, here and here). 

Today I’m sitting in our home, listening to waves of bird and insect songs, and the sound of a gentle rain.  It is the end of another hot season and the beginning of another rainy season, now my 5th in a row here in Northwestern Thailand and our (my wife Amber and son Salem, and me) 3rd rainy season since moving here full time in 2013.  The rains mean that the oppressive heat will subside a bit.  They also mean that malaria season is taking off again. 


Just this last week the final chapter of my dissertation was accepted for publication in Malaria Journal.  What I’d like to do over a few short posts is to flesh out a couple of parts of this paper, hopefully to reach an audience that is interested in malaria and infectious diseases, but perhaps doesn’t have time to keep up on all things malaria or tropical medicine.  This also gives me a chance to go into more detail about specific parts of the paper that I needed to condense for a scientific paper. 


The project that I worked on for my dissertation included several study villages, made up of mostly Karen villagers, on the Thai side of the Thailand-Myanmar border. 

This particular research had several very interesting findings. 

In one of the study villages we did full blood surveys, taking blood from every villager who would participate, every 5 months, for a total of 3 surveys over 1 year.  These blood surveys included making blood smears on glass slides as well as taking blood spots on filter papers that could later be PCRd to test for malaria.  Blood smears are the gold standard of malaria detection (if you're really interested, see here and here).  A microscopist uses the slides to look for malaria parasites within the blood.  Diagnosing malaria this way requires some skill and training.  PCR is generally considered a more sensitive means of detecting malaria, but isn’t currently a realistic approach to use in field settings.[ii]

Collecting blood, on both slides (for microscopic detection of malaria)
and filter papers (for PCR detection of malaria)


The glass slides were stained and immediately read by a field microscopist, someone who works at a local malaria clinic, and anyone who was diagnosed with malaria was treated.  The slides were then shipped to Bangkok, where an expert microscopist, someone who has been diagnosing malaria this way for over 20 years and who trains others to do the same, also read through the slides.  Then the filter papers were PCRd for malaria DNA.  In this way we could compare three different modes of diagnosing malaria – the field microscopist, the expert microscopist, and PCR. 

And basically what we found was that the field microscopist missed a whole lot of malaria.  Compared to PCR, the field microscopist missed about 90% of all cases (detecting 8 cases compared to 75 detected by PCR).   Even the expert microscopist missed over half of the cases (detecting 34 infections). 
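To make those percentages concrete, here is a minimal back-of-the-envelope sketch (in Python, not taken from the paper) that treats PCR as the reference standard and computes each reader's sensitivity; only the three counts come from the post, the rest is just illustration.

```python
# Minimal sketch: sensitivity of each microscopist relative to PCR,
# using the counts quoted in the post (8, 34, and 75 infections).
pcr_positives = 75                      # infections detected by PCR

detected = {
    "field microscopist": 8,
    "expert microscopist": 34,
}

for reader, n_found in detected.items():
    sensitivity = n_found / pcr_positives
    print(f"{reader}: detected {n_found}/{pcr_positives} "
          f"= {sensitivity:.0%}, missed ~{1 - sensitivity:.0%}")
```

Run as written, this gives roughly 11% sensitivity (about 89% missed) for the field reading and about 45% (over half missed) for the expert reading, which is where the figures above come from.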

What does this mean though? 

Let's start with how this happens.  To be fair, it isn’t just that the microscopists are bad at what they do.  There are at least two things at play here: one has to do with training and quality control (QC) while the other has to do with limits of detection.

Microscopy requires proper training, upkeep of that training, quality control systems, and upkeep of the materials (reagents, etc.).  In a field setting, all of these things can be difficult.  Mold and algae can grow inside a microscope.  The chemicals used in microscopy, for staining the slides and so on, can go bad, and probably will more quickly under very hot and humid conditions.  In more remote areas, retraining workshops and frequent quality control testing are more difficult to accomplish and therefore less likely to happen.  There is a brain drain problem too.  Many of the most capable laboratory workers leave remote settings as soon as they have a chance (for example, if they can get better salary and benefits for doing the same job elsewhere – perhaps by becoming an “expert microscopist”?).

Regarding the second point, I think that most people would expect PCR to pick up more cases than microscopy.  In fact, there is some probability at play here.  When we prick someone’s finger and make a blood smear on a glass slide, there is a possibility that even if there are malaria parasites in that person’s blood, there won’t be any in the blood that winds up on the slide.  The same is true when we make a filter paper for doing PCR work.  Moreover, the microscopist is unlikely to look at every single spot on the glass slide, so there is some probability at play there too.  There could be parasites on the slide, but only in a far corner where the microscopist doesn’t happen to look.  These are the infections that would hopefully be picked up through PCR anyway. 
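As a rough illustration of this sampling problem (my own toy model, not anything from the paper), suppose parasites are scattered randomly through the blood; then the chance that at least one lands in the volume actually examined follows a simple Poisson calculation. The examined volumes below are guesses made for the sake of the example, not measured values.

```python
# Toy model of the sampling problem: if parasites are randomly (Poisson)
# distributed in the blood, the chance that at least one ends up in the
# volume actually examined is 1 - exp(-density * volume).
# The volumes are illustrative guesses, not measured values.
import math

def p_detectable(parasites_per_uL, examined_uL):
    """Probability that at least one parasite lands in the examined volume."""
    return 1 - math.exp(-parasites_per_uL * examined_uL)

examined = {
    "blood actually scanned on a smear": 0.25,   # microlitres -- assumed
    "blood represented in a PCR sample": 5.0,    # microlitres -- assumed
}

for density in (0.5, 2, 10, 100):                # parasites per microlitre
    line = ", ".join(f"{name}: {p_detectable(density, vol):.0%}"
                     for name, vol in examined.items())
    print(f"{density:>5} parasites/uL -> {line}")
```

At high parasite densities both methods catch essentially everything; at very low densities the smaller examined volume on a smear can easily contain no parasites at all, which is the point being made here.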

Presumably, many of these infected people had a low level of parasitemia, meaning relatively few parasites in their blood, making it more difficult to catch the infection through microscopy.  Conversely, when people have lots of parasites in their blood, it should be easier to catch regardless of the method of diagnosis.  


These issues lead to a few more points. 

Some people have very few parasites in their blood while others have many.  The common view on this is that in high transmission[iii] areas, people will be exposed to malaria very frequently during their life and will therefore build up some immunity.  These people will have immune systems that can keep malaria parasite population numbers low, and as a result should not feel as sick.  Conversely, people who aren’t frequently exposed to malaria would not be expected to develop this type of “acquired” (versus inherited or genetic) immunity.  Here in Southeast Asia, transmission is generally considered to be low – and therefore I (and others) wouldn’t normally expect high levels of acquired malaria immunity.  Why then are we finding so many people with few parasites in their blood? 

Furthermore, those with very low numbers of parasites may not know they’re infected.  In fact, even if they are tested by the local microscopist they might not be diagnosed (probably because they have a “submicroscopic” infection).  From further work we’ve been doing along these lines at SMRU and during my dissertation work, it seems that many of these people don’t have symptoms, or if they do, those symptoms aren’t very strong (that is, some are “asymptomatic”).

It also seems like this isn’t exactly a rare phenomenon and this leads to all sorts of questions:  How long can these people actually carry parasites in their blood – that is, how long does a malaria infection like this last?  In the paper I’m discussing here we found a few people with infections across multiple blood screenings.  This means it is at least possible that they had the same infection for 5 months or more (people with no symptoms, who were only diagnosed by PCR quite a few months later, were not treated for malaria infection).  Also, does a person with very few malaria parasites in her blood, with no apparent symptoms, actually have “malaria”?  If they’re not sick, should they be treated?  Should we even bother telling them that they have parasites in their blood?  Should they be counted as a malaria case in an epidemiological data system?

For that matter, what then is malaria?  Is it being infected with a Plasmodium parasite, regardless of whether or not it is bothering you?  Or do you only have malaria when you're sick with those classic malaria symptoms (periodic chills and fevers)?  

Perhaps what matters most here though is another question: Can these people transmit the disease to others?  Right now we don’t know the answer to this question.  It is not enough to only have malaria parasites in your blood – you must have a very specific life stage of the parasite present in your blood in order for an Anopheles mosquito to pick the parasite up when taking a blood meal.  The PCR methods used in this paper would not allow us to differentiate between life stages – they only tell us whether or not malaria is present.  This question should, however, be answered in future work. 




*** As always, my opinions are my own.  This post and my opinions do not necessarily reflect those of Shoklo Malaria Research Unit, Mahidol Oxford Tropical Medicine Research Unit, or the Wellcome Trust. For that matter, it is possible that absolutely no one agrees with my opinions and even that my opinions will change as I gather new experiences and information.  

    





[ii]  PCR can be expensive, can take time, and requires machinery and materials that aren’t currently practical in at least some field settings.
[iii] In a high transmission area, people would have more infectious bites by mosquitoes per unit of time when compared to a low transmission area.  For example, in a low transmission area a person might only experience one infectious bite per year whereas in a high transmission area a person might have 1 infectious bite per month.  

Thursday, May 14, 2015

Coffee - a guilt-free pleasure?

A lot of research money has been spent trying to find the bad stuff that coffee does to us.  But Monday's piece by Aaron Carroll in the New York Times reviewing the literature concludes that not only is it not bad, it's protective against a lot of diseases.  If he's right, then something that's actually pleasurable isn't sinful after all!  

The piece had so many comments that the author was invited to do a follow-up, answering some of the questions readers raised.  First, Carroll reports that a large meta-analysis looking at the association between coffee and heart disease found that people who drank 3-5 cups a day were at the lowest risk of disease, while those who drank 5-10 had the same risk as those who drank none.

And, by 'coffee', Carroll notes that he means black coffee, not any of the highly caloric, fat- and sugar-laden drinks he then describes.  But it can't be that all 1,270,000 people in the meta-analysis drank their coffee black, so it's odd that he brings this up.  Fat and sugar are our current food demons, yes (speaking of pleasurable!), but really, does anyone have "a Large Dunkin’ Donuts frozen caramel coffee Coolatta (670 calories, 8 grams of fat, 144 grams of carbs)" (Carroll's words) 5-10 times a day?

A lifesaving cup of black coffee; Wikipedia

So, two to six cups of coffee a day are associated with lower risk of stroke, 'moderate' consumption is associated with lower risk of cardiovascular disease, in some studies (but not others) coffee seems to be associated with a lower risk of cancers, including lung cancer -- unless you're a smoker, in which case the more coffee you drink, the higher your risk.  The more coffee you drink, the lower your risk of liver disease, or of your current stage of liver disease advancing; coffee is associated with reduced risk of type 2 diabetes. And, Carroll reports, two meta-analyses found that "drinking coffee was associated with a significantly reduced chance of death." Since everyone's chance of dying is 100%, this isn't quite right -- what he means, presumably, is that it's associated with lowered risk of death at a given age, and by implication, longer life (though whether that means longer healthy life or not is unclear).

In the follow-up piece, he was asked whether this all applies to decaffeinated coffee as well.  Decaf isn't often studied, but when it is the results are often the same as for caffeinated coffee, though not always.  And sometimes true for tea as well.  So, is it the caffeine or something else in these drinks?

This is an interesting discussion, and it raises a lot of questions, but to me they have more to do with epidemiological methods than with the idea that we should all now feel free to drink as much coffee as we'd like, guilt-free (though, frankly, among 'guilty pleasures', for me coffee isn't nearly pleasurable enough to rank very high on the list!).  Indeed, after presenting all the results, Carroll notes that most of the studies were not randomized controlled trials, the gold standard of epidemiological research.  The best way to actually determine whether coffee is safe, dangerous, or protective would be to compare the disease outcomes of large groups of people randomly assigned to drink 1, 2, 3....10 cups of (black) coffee a day for 20 years.  This obviously can't be done, so researchers do things like ask cancer patients, or people with diabetes or heart disease, how much coffee they drank for the last x years, and compare coffee drinkers with non-drinkers.

So right off the bat, there are recall issues.  Though it's probably true that many people routinely have roughly the same number of cups of coffee every day, so the recall issues won't be as serious as, say, asking people how many times they ate broccoli in the last year.  But still, it's an issue.

More importantly, there are confounding issues, and the lung cancer association is the most obvious one.  If the risk of lung cancer goes up with the number of cups of coffee people drink a day, that's most likely because smokers have a cigarette with their coffee.  Or, have coffee with their cigarette.

Or, less obviously, perhaps people who drink a lot of coffee don't drink, I don't know, diet soda, and diet soda happens to be a risk factor for obesity, and obesity is associated with cancer risk (note: I made that up, more or less out of whole cloth, to illustrate the idea of confounding).
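To show what that kind of confounding looks like in numbers, here is a toy simulation in the same spirit as the made-up example above: smoking (the confounder) raises both coffee drinking and lung-cancer risk, coffee itself does nothing, and yet the crude comparison still "finds" a coffee effect until you stratify by smoking. Every number is invented.

```python
# Toy confounding simulation: smoking drives both coffee drinking and cancer
# risk; coffee has no effect of its own. The crude comparison shows a spurious
# coffee-cancer association; stratifying by smoking makes it disappear.
# Every number here is invented for illustration.
import random
random.seed(1)

def simulate(n=200_000):
    people = []
    for _ in range(n):
        smoker = random.random() < 0.25
        coffee = random.random() < (0.7 if smoker else 0.4)    # smokers drink more coffee
        cancer = random.random() < (0.05 if smoker else 0.005)  # risk depends only on smoking
        people.append((smoker, coffee, cancer))
    return people

def risk(people, coffee, smoker=None):
    group = [p for p in people
             if p[1] == coffee and (smoker is None or p[0] == smoker)]
    return sum(p[2] for p in group) / len(group)

people = simulate()
print(f"crude:        coffee {risk(people, True):.3f} vs none {risk(people, False):.3f}")
for s in (False, True):
    print(f"smoker={s!s:5}: coffee {risk(people, True, s):.3f} "
          f"vs none {risk(people, False, s):.3f}")
```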

Una tazzina di caffè; Wikipedia

And what about the idea that decaffeinated coffee, and black but not green tea, can have the same effect?  If there really is something protective about these drinks, and we're going to get reductionist about it and finger a single component, what about water?  Could drinking 10 cups of water a day protect against liver disease, say?  True, not all the studies yield the same results, and the black-but-not-green-tea association suggests it's not the water, but not all studies show a protective effect of coffee, either.  But this idea would be easy to test -- Italians drink espresso by the teaspoon and on the run.  Do Italian studies of coffee drinking show the same protective effect?

Remember when drinking drip coffee was associated with increased cholesterol levels?  Carroll writes:
[A]s has been reported in The New York Times, two studies have shown that drinking unfiltered coffee, like Turkish coffee, can lead to increases in serum cholesterol and triglycerides. But coffee that’s been through a paper filter seems to have had the cholesterol-raising agent, known as cafestol, removed. 
High blood pressure and high cholesterol would be of concern because they can lead to heart disease or death. Drinking coffee is associated with better outcomes in those areas, and that’s what really matters.
So, high blood pressure and high cholesterol aren't in fact associated with heart disease or death?  Or only in non-coffee drinkers? A word about methods is in order.  The results Carroll reviews are based on meta-analyses, that is, analyses that combine the results of sets of independently done studies.  As Carroll himself said, some individual studies found an association between coffee and cancer at a particular site, but the effect of meta-analysis was to erase these.  That is, what showed up in a single study was no longer found when studies were combined.  In effect, this sort of pooling assumes homogeneity of causation and eliminates heterogeneity that may be present in individual studies, for whatever reason.  Meta-analysis allows far greater total sample sizes, and for that reason has become the final word in studying causation, but it can in fact introduce its own issues.  It gains size by assuming uniformity, and that is a major assumption (almost always untested or untestable), which can amount to a pragmatic way of wishing away subtleties that may exist in the causal landscape.
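For readers who haven't seen the machinery, here is a stripped-down sketch of fixed-effect, inverse-variance pooling, the workhorse behind many meta-analyses. The study numbers are invented simply to show how a real effect in one study can be diluted to near-null when combined with larger null studies under the homogeneity assumption.

```python
# Stripped-down fixed-effect meta-analysis: each study's log relative risk is
# weighted by the inverse of its variance. The numbers are invented to show how
# one study's real signal can be washed out by larger null studies.
import math

# (log relative risk, standard error) for each hypothetical study
studies = [
    (math.log(1.80), 0.20),   # one smallish study sees an 80% higher risk
    (math.log(1.00), 0.08),   # three larger studies see essentially nothing
    (math.log(0.95), 0.07),
    (math.log(1.05), 0.09),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * logrr for (logrr, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

With these made-up inputs the pooled relative risk comes out close to 1.0, even though one study on its own reported a substantial effect; whether that is appropriate depends entirely on whether the studies really are estimating the same thing.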

I'm not arguing here that coffee is actually bad for us, or that it really isn't protective.  My point is just that these state-of-the-art studies exhibit the same methodological issues that plague epidemiological studies of asthma, or obesity, or schizophrenia, or most everything else.  Epidemiology hasn't yet told us the cause of these, or many other diseases, because of confounding, because of heterogeneity of disease, because of multiple pathways to disease, because it's hard to think of all the factors that should be considered, because of biases and the confounding of correlated factors, because meta-analysis has its own effects, and so on. 

One should keep in mind that last year's True Story, and many a year before that, had coffee -- a sinful pleasure by our Puritanical standards -- implicated in all sorts of problems, from heart disease to pancreatic cancer, to who knows what else.  Why should we believe today's Latest Big Finding?


Even if drinking coffee is protective to some extent, the effect can't be all that strong, or the results would have been obvious long ago.  And, the protective effects surely can't cancel out the effects of smoking, say, or overeating, or 5-10 coffee Coolattas a day.  The moral, once again, must be moderation in all things, not a reductive approach to longer life.

Wednesday, May 13, 2015

Just-So Babies

If you've ever watched a baby eat solid foods, that's DuckFace.

If you've ever seen a shirtless baby, that's DadBod.


Why are we so into these things, whatever they are, right now?


Because whether we realize it or not, they're babylike, which means they're adorable. And all things #adorbs are so #totes #squee right now for the millions (billions?) of social media users in our species. And if they're babylike, they're especially adorable to women and women are more frequently duckfaces than men. And women are increasingly open to embracing, maritally, the non-chiseled men of the world...who knew?


Well, anyone and everyone who's spent a damn second raising a baby, that's who. Especially those with mom genes.


Understanding babies, how they develop, and our connections to them while they do so is key to explaining just about everything, and perhaps literally eh-vuh-ray-thing, about humanity. 
How can I be so sure? Well aren't you?

We all know that the most attractive women are the ones that look like babies. 

source
And to help Nature out, makeup, lasers, and plastic surgery neotenize us temporarily or permanently, making our skin smooth, our eyes big, our lips pouty, our cheeks pinchable and rosy, and our noses button-y.

That stuff about beauty is common knowledge isn't it? We do these things to ourselves because of our evolved preferences for babies. We find them to be so extremely cute that this adaptive bias for babies affects much of the rest of our lives. Beauty is just the tip of the iceberg because, like I said, babies explain everything: DuckFace, DadBod, ...


And, yes, I do have more examples up my sleeve.


All that weight we gain while pregnant? You think it's to stockpile fat for growing a superhuge, supercharged baby brain both before and then after it barely escapes our bipedal pelvis?

Me and Abe, with hardly an inkling that there's still a whopping five more weeks ahead of us... suckers.
Or maybe I gained 20 pounds above and beyond the actual weight of the pregnancy so that I could protect my baby from calorie lulls from disease or food shortage, especially when those things happened more frequently to my ancestors.

Nope. And nope.


Pregnant women gain all that weight so that its lightning fast loss while lactating leaves behind a nice saggy suit of skin for the baby to grab and hold onto--not just on our bellies, but our arms and legs too. Our ancestors were dependent on this adaptation for quite a while, but over time mothers and infants became less dependent on it when they started crafting and wearing slings. Slings reduced selection on a baby's ability to grasp, you know.


Before slings, selection would have been pretty intent on favoring baby-carrying traits in both mothers and babies. For example, the way that our shoulder joints are oriented laterally, to the side, is unlike all the apes' shoulders, which are oriented more cranially, so they're always kind of shrugging. You think we have these nice broad shoulders for swinging alongside us while running, for seriously enhancing our throwing ability, and, of course, for making stone tools? 

No. No. No.

All that's great for later in our lives, but our lateral-facing shoulder joints are for being picked up and carried around while we're helpless babies. Our sturdy armpits are necessary for our early survival. And, biomechanically, those shoulder joints are oriented in the optimal way for carrying babies too. It's a win-win. Combine that with the shorter hominin forearm, oh, and that itty-bitty thing called hands-free locomotion, and it's obvious that we're designed to carry our babies and also to be carried as babies.


Bums come into play here too.


You probably think your big bum's for bipedal endurance running don't you? Or you might assume it evolved to give a stone-tipped spear a lot of extra oomph while impaling a wooly rhino hide.


Wrong. And wrong again.


Our big bums develop early in life because, like armpits, they build grab'n'go babies as well as well-designed grown-up baby carriers.

source
Bums plop nicely on a forearm and most certainly give babies and moms an advantage at staying together. Bums on moms (if not completely liquefied and fed to baby) steady her while holding such a load and also provide something for a baby slung on her back to sit on. Once babies lost the ability to grasp onto moms, babies' bodies had to adapt to be portable objects and moms' bodies had to adapt to never drop those portable objects (at least not too far). No doubt big bums, like sturdy armpits, evolved before slings and home bases were ubiquitous in our species. 

Here's another one: The pregnancy "mask."


All those pigmentation changes that we describe as a side effect of the hormones are much more than that. Those new brown and red blotches that grow on a mother's chest and face, those are functional. They're fascinators. A mother's body makes itself more interesting and loveable for the busy, brainy baby on its way. Once we started decorating our bodies with brown and red ochre and pierced shell, bones and teeth, selection on these biological traits was relaxed. But they still persist. Why not? A human baby can't be over-fascinated, can it?


Oh, and fire. That was the best thing that ever happened to babies, which means it was the best thing that ever happened to everyone living with babies. Quiet, serene fascination, those flames... which also happen to process food for toothless babies whose exhausted, stay-at-foraging parents would much rather swallow for themselves the food they chew up.


The baby also grows fascinators of its own. The big long hallux. Yep. Our big toe is long compared to other apes'. This is where you say it's an adaptation for bipedalism but you'd be only half right.



© naturepl.com / Ingo Arndt / WWF
The length makes it easier to reach with our mouths, as babies. And we teethe on that big toe. Imagine a world with no Sophies! That's what our ancestors had to deal with. Toes as teething toys doesn't seem so ridiculous when you remember that our long thumbs evolved for sucking.

Anyway, this long hallux was a bit unwieldy so thanks to a lucky mutation we stuck it to the rest of the foot and this turned out to work rather well for bipedalism.


Now that it's been a few minutes into this post, you must be sitting there at your computer thinking about boobs.


Yep, babies explain those too! The aesthetic preference for large breasts, by both males and females, is just nostalgia and allometry. You know how when you go back to visit your old kindergarten it looks so tiny compared to your memory? While you're a small human, you spend quite a lot of time with breasts, focused intently on them. But grow your early impression of breasts up in proportion to your adult body's sense of the world and, well, that's quite a big silicone kindergarten!


Your desires, your preferences, your tastes, your anatomy now, your anatomy when you were a baby... everything is babies, babies, babies. Even bipedalism itself.


Gestating a large fetus would not be possible if we were not bipedal. Think about it. All apes are bipedal to a significant degree. What pressured us into being habitual bipeds? Growing big fat, big-brained babies, that's what. Can you imagine a chimpanzee growing a human-sized fetus inside it and still knuckle-walking? I doubt the body could handle that. The spine alone! If you walk upright and let your pelvis help to carry that big fetus, you're golden. Obviously it worked for us.


I could go on forever! But I'll just give you one more example today. It's one you didn't see coming.


Women live longer than men, on average, and a large portion of that higher male mortality rate (at older ages) is due to trouble with the circulatory system. Well, it's obvious why. I'm looking at my arms right now and, complementing these brown and red fascinators, another part of my new mom suit is this web of ropy blue veins. Is this because my baby's sucked up all my subcutaneous fat from under my saggy skin, or... Or! Is it because my plumbing's stretched after housing and pumping about 50% more blood than normal by the third trimester. If my pipes are now, indeed, relatively larger for my blood volume and my body size then, all things being equal, that should reduce my risk of clogging and other troubles. Most women experience a term pregnancy during their lives. I'm sure this explains most if not all of the differences in mortality between men and women.


Like I said, that's just the start. And although I haven't provided evidence for many of the things I wrote, that shouldn't matter. These are just-so stories and they're terribly fun to think about. They're nothing close to approximating anything as lovely as Kipling's but they're what we humans do. If you're not a fan of today's post, hey, it's not like it passed peer review!

Tuesday, May 12, 2015

N=1 drug trials: yet another form of legerdemain?

The April 30 issue of Nature has a strange commentary ("Personalized medicine: time for one-person trials," by Nicholas Schork) arguing for a new approach to clinical trials, in which individuals are the focus of entire studies, the idea being that personalized medicine is going to be based on what works for individuals rather than on what works for the average person.  This, we would argue, shows the mental tangles and gyrations being undertaken to salvage something that is, for appropriate reasons, falling far short of expectation, and threatening big business as usual.  The author is a rightly highly regarded statistical geneticist, and the underlying points are clearly made.

A major issue is that the statistical evidence shows that many important and costly drugs are now known to be effective in only a small fraction of the patients who take them.  That is shown in this figure from Schork's commentary.  For each of 10 important drugs, the blue icons represent persons with positive results, and the red icons the relative number of people who do not respond successfully to the drug.


Schork calls this 'imprecision medicine', and asks how we might improve our precision.  The argument is that large-scale sampling is too vague or generic to provide focused results.  So he advocates samples of size N=1!  This seems rather weird, since you can hardly find associations that are interpretable from a single observation; did a drug actually work, or would the person's health have improved despite the drug, e.g.? But the idea is at least somewhat more sensible: it is to measure every possible little thing on one's chosen guinea pig and observe the outcome of treatment.

"N-of-1" sounds great and, like Big Data, is sure to be exploited by countless investigators to glamorize their research, make their grant applications sound deeply insightful and innovative, and draw attention to their profound scientific insights.  There are profound issues here, even if it's too much yet another PR-spinning way to promote one's research.  As Schork points out, major epidemiological research, like drug trials, uses huge samples with only very incomplete data on each subject.  His plea is for far more individually intense measurements on the subjects.  This will lead to more data on those who did or didn't respond.  But wait.....what does it mean to say 'those'?

In fact, it means that we have to pool these sorts of data to get what will amount to population samples.  Schork writes that "if done properly, claims about a person's response to an intervention could be just as well supported by a statistical analysis" as standard population-based studies. However, it boils down to replication-based methods in the end, and that means basically standard statistical assumptions.  You can check the cited reference yourself if you don't agree with our assessment.

That is, even while advocating N-of-1 approaches, the conclusion is that patterns will arise when a collection of such person-trials are looked at jointly.  In a sense, this really boils down to collecting more intense information on individuals rather than just collecting rather generic aggregates. It makes sense in that way, but it really does not get around the problem of population sampling and the statistical gerrymandering typically needed to find signals that are strong or reliable enough to be important and generalizable.
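For what such a design can look like in practice, here is a minimal sketch of one common way an N-of-1 crossover trial is analyzed (my own illustration, not Schork's method): one patient alternates randomized drug and placebo periods, and the within-patient differences are summarized; pooling many such trials is exactly where ordinary population statistics come right back in. All data are invented.

```python
# Minimal N-of-1 crossover sketch (illustration only, not Schork's method):
# one patient alternates randomized drug and placebo blocks, and the paired
# within-patient differences are summarized with a t-statistic. Data invented.
import math
import statistics

drug_blocks    = [128, 131, 125, 127, 130]   # e.g., systolic BP during drug blocks (made up)
placebo_blocks = [139, 135, 141, 136, 138]   # same patient, placebo blocks (made up)

diffs = [d - p for d, p in zip(drug_blocks, placebo_blocks)]
mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(len(diffs))

print(f"mean within-patient difference = {mean_diff:.1f} mmHg, "
      f"t = {mean_diff / se:.2f} on {len(diffs) - 1} df")

# Pooling many such single-patient estimates (say, precision-weighted averaging)
# is exactly where this circles back to standard population-level statistics.
```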

Better and more focused data may be an entirely laudable goal, if quality control and so on can in some way be ensured.  But beyond this, N-of-1 seems more like a shell game or an illusion in important ways.  It's a sloganized way to get around the real truth, of causal complexity, that the scientific community (including us, of course) simply has not found adequate ways of understanding--or, if we have, then we've been dishonorably ignoring what we know in making false promises to the public, who support our work and seem to believe what scientists say.

It's a nice idea, or perhaps one should say 'nice try'.  But it really strikes one as more wishful than novel thinking, a way to keep on motoring along with the same sorts of approaches, looking for associations without good theoretical or prior functional knowledge.  And, of course, it's another way to get in on the million-genome, 'precision medicine'© gravy train. It's a different sort of plea for the usual view that intensified reductionism, the enumeration of every scrap of data one can find, will lead to an emerging truth.  Sometimes, for sure, but how often is that likely?

We often don't have such knowledge, but whether there is or isn't a conceptually better way, rather than a kind of 'trick' to work around the problem, is the relevant question.  There will always be successes, both lucky and because of appropriately focused data.  The plea for more detailed knowledge, and treatment adjustments, for individual patients goes back to Hippocrates and should not be promoted as a new idea.  Medicine is still largely an art and still involves intuition (ask any thoughtful physician if you doubt that).

However, retrospective claims usually stress the successes, even if they are one-off rather than general, at the neglect of the lack of overall effectiveness of the approach--as an excuse to avoid facing fully up to the problem of causal complexity.  What we need is not more slogans, but better ideas, questions, more realistic expectations, or really new thinking.  The best way of generating the latter is to stop kidding ourselves by encouraging investigators, especially young investigators, to dive into the very crowded reductionist pool.

Monday, May 11, 2015

Disenfranchisement by differential death rates

A reader sent us a pdf the other day of a paper with an interesting interpretation of black/white differential mortality, hoping we'd write about it.  The paper, by Rodriguez et al., is called "Black lives matter: Differential mortality and the racial composition of the U.S. electorate, 1970–2004."  The authors analyzed the effects of excess mortality in marginalized populations on the composition of the electorate in the US between 1970 and 2004, and conclude that mortality differentials mean fewer African American voters, and that this can turn elections, as they demonstrate for the election of 2004.

The authors used cause of death files for 73 million US deaths spanning 34 years to calculate:
(1) Total excess deaths among blacks between 1970 and 2004, (2) total hypothetical survivors to 2004, (3) the probability that survivors would have turned out to vote in 2004, (4) total black votes lost in 2004, and (5) total black votes lost by each presidential candidate.
This is straightforward demography: what was the death rate for whites in a given age group in a given year, and how does the black death rate compare?  This allows Rodriguez et al. to estimate excess deaths (relative to whites) at every age, and then, knowing the proportion of every age group that votes historically, and the proportion of those votes that go to each major party, they can estimate how many votes were lost, and how they might have changed election outcomes.  Clever, and important.
We estimate 2.7 million excess black deaths between 1970 and 2004. Of those, 1.9 million would have survived until 2004, of which over 1.7 million would have been of voting-age. We estimate that 1 million black votes were lost in 2004; of these, 900,000 votes were lost by the defeated Democratic presidential nominee. We find that many close state-level elections over the study period would likely have had different outcomes if voting age blacks had the mortality profiles of whites.  US black voting rights are also eroded through felony disenfranchisement laws and other measures that dampen the voice of the US black electorate. Systematic disenfranchisement by population group yields an electorate that is unrepresentative of the full interests of the citizenry and affects the chance that elected officials have mandates to eliminate health inequality.
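Here is a minimal sketch of the demographic arithmetic described above, with invented age groups, rates, and population counts rather than the paper's 34 years of death files: excess deaths are simply observed black deaths minus the deaths expected if black age groups experienced white age-specific death rates.

```python
# Sketch of indirect standardization with invented numbers (the paper uses 34
# years of actual cause-of-death files). Excess deaths = observed black deaths
# minus the deaths expected if blacks had white age-specific death rates.
age_groups       = ["25-34", "35-44", "45-54", "55-64"]
black_population = [5_000_000, 4_500_000, 4_000_000, 3_000_000]  # made up
black_rate       = [0.0020, 0.0040, 0.0090, 0.0200]              # deaths/person-year, made up
white_rate       = [0.0012, 0.0025, 0.0060, 0.0140]              # deaths/person-year, made up

total_excess = 0
for age, n, rb, rw in zip(age_groups, black_population, black_rate, white_rate):
    excess = n * rb - n * rw          # observed minus expected at white rates
    total_excess += excess
    print(f"{age}: {excess:>8,.0f} excess deaths")

print(f"total excess deaths in this toy year: {total_excess:,.0f}")
# The paper then carries hypothetical survivors forward to 2004 and applies
# age-specific turnout and party preference to estimate votes lost.
```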
Throughout the 20th century, they write, black mortality was 60% higher than white mortality, on average.  Why?  "Predominantly black neighborhoods are characterized by higher exposure to pollution, fewer recreational facilities, less pedestrian-friendly streets/sidewalks, higher costs for healthy food, and a higher marketing effort per capita by the tobacco and alcohol industries."  And, black neighborhoods have less access to medical facilities, the proportion of the black population that's uninsured is higher than whites, and exposure to daily racism takes a toll on health, among other causes.

Age distributions of the deceased by race, Rodriguez et al, 2015

The resulting differential mortality, the authors suggest, influenced many local as well as national elections in 2004.  And, with the deep, insidious effects Rodriguez et al. report, it may well be that excess mortality begets excess mortality, as blacks overwhelmingly vote Democratic, and the Republicans who win when black voter turnout isn't what it would be without the effects of differential mortality are the politicians who are less likely to support, among other things, a role for government in health care.  (To wit, all those serial Republican votes against the Affordable Care Act, and the budget passed just last week by Republicans in the Senate that would do away with the ACA entirely.)  And, it has long been known that the uninsured disproportionately die younger, and of diseases for which the insured get medical care.

"Lyndon Johnson and Martin Luther King, Jr. - Voting Rights Act" by Yoichi Okamoto - Lyndon Baines Johnson Library and Museum. Image Serial Number: A1030-17a. Wikipedia

This isn't the only form of disenfranchisement that the African American population is subject to, of course.  Felony disenfranchisement is a major cause, and, recent election cycles have seen successful attempts by multiple Republican-led state governments to "control electoral fraud" (the existence of which has been hard to impossible to prove) by limiting the voting rights of blacks, Hispanics, the young and the elderly, people who are more likely to vote Democratic.

In the May 21 New York Review of Books, Elizabeth Drew wrote a scathing review of these ongoing attacks.  It's a must-read if you're interested in the systematic, planned, fraudulent co-option of voting rights in the US.  This, coupled with the 2013 Supreme Court decision on the Voting Rights Act, along with the role of big money in politics, is having an impact on democracy in the US.  As Drew wrote,
In 2013 the Supreme Court, by a 5–4 vote, gutted the Voting Rights Act. In the case of Shelby v. Holder, the Court found unconstitutional the sections requiring that states and regions with a history of voting discrimination must submit new voting rights laws to the Justice Department for clearance before the laws could go into effect. Congressman John Lewis called such preclearance “the heart and soul” of the Voting Rights Act. No sooner did the Shelby decision come down than a number of jurisdictions rushed to adopt new restrictive voting laws in time for the 2014 elections—with Texas in the lead.
Republicans can be rightly accused of a lot of planned disenfranchisement of minority, young and elderly voters, given the extensive gerrymandering of the last 10 years or so, and the recent spate of restrictive voter ID laws.  Can they be accused of planning disenfranchisement by differential mortality?  Probably not, though it's pretty likely this report won't spur them into action to address the causes of mortality inequality.  Though, Rodriguez et al. write:
The current study findings suggest that excess black mortality has contributed to imbalances in political power and representation between blacks and whites. Politics helps determine policy, which subsequently affects the distribution of public goods and services, including those that shape the social determinants of health, which influence disenfranchisement via excess mortality. In the United States, especially after the political realignment of the 1960s, policy prescriptions emanating from government structures and representing ideologically divergent constituencies have influenced the social determinants of health, including those that affect racial disparities. And given the critical role of elected politicians in the policy-making apparatus, the available voter pool is an essential mechanism for the distribution of interests that will ultimately be represented in the policies and programs that affect us all.
It's hard to imagine that the Democrats in power today would champion the kind of Great Society programs Johnson pushed through in the 1960's, but Obama did give us the Affordable Care Act, and whatever you think of it, it's possible that this could have an impact, even if slight, on differential black/white mortality.

As we edge further away from universal enfranchisement in this democracy, for obvious and, as Rodriguez et al. report, less obvious reasons, Ken always points out that we should step back and ask how different the kind of minority rule we've got now (in effect, 1% rule, with wealthy legislators passing laws that favor the wealthy) is from the minority rule that has been the order of the day throughout history.  It's a dispiriting thought, given that we saw a flurry of action in favor of equal rights through the latter half of the 1900's.  But equality is not a major theme in political discourse these days.

One way or another, hierarchies of power and privilege are always established.  The minority at the societal top always find ways to keep the majority down, and retain inequitable privilege for themselves.  Differential mortality of the kind reported here is just one current way that the privilege hierarchy is maintained.  Even the communists, not to mention Christians, whose formal faiths have been against inequity, are perpetrators.  Maybe there's room for optimism somewhere, but where isn't obvious. One argument is that, at least, the living conditions of those at the socioeconomic bottom (in developed countries) are better than their societal ancestors.  Of course equity is itself a human concept and not one about which there is universal agreement in favor.  But this new study is food for thought about ideal societies, and how difficult they are to achieve.

Friday, May 8, 2015

Captain FitzRoy knew best!

Robert FitzRoy (1805-1865), Captain of HMS Beagle, is a famous personage.  He's mainly famous by association with his one-time passenger, the naturalist Charles Darwin, on the famous voyage to map the coastline of South America.  FitzRoy was a prominent Royal Navy officer who held many other positions during his career, but he is also widely known for his religious fundamentalism and opposition to Darwin's heretical theory of evolution by natural selection.

Among other things, it was FitzRoy who had gathered a collection of Galapagos finches, which, some years after the return to England, Darwin asked to borrow to help flesh out his evolutionary ideas in ways that are now quite famous.  FitzRoy bitterly resented that he had helped Darwin establish this terrible idea about life.

FitzRoy has paid the price for his stubborn biblical fundamentalism, a price of ridicule.  But that is quite unfair.  FitzRoy was a sailor, and his hydrographic surveys of South America on the Beagle were an important part of the Navy's desire to understand the world's oceans.  His meticulous up-and-down, back-and-forth fathoming of the ocean floor and coastline gave Darwin months of free time to roam the adjacent territory in South America, doing his own kind of geological surveying, which led him to see the physical and biogeographic patterns that accounted for the nature and origin of life's diversity.  That it was heretical to biblical thought was not Darwin's motive at the time, nor is it reasonable to expect that a believer, like FitzRoy, would accept such a challenge to the meaning of existence lightly.




But FitzRoy, who went on to other distinguished  positions in the British government, made a contribution perhaps more innovative and at least as important as his hydrographic surveys:  he was the pioneer of formal meteorology, and of weather forecasting.

This contribution of FitzRoy's is important to me, because I was at one time a professional meteorologist (and, indeed, in Britain).  FitzRoy developed extensive material on proper weather-measuring instrumentation (weather vanes, thermometers, barometers), and ways of collecting and analyzing meteorological data.  His main objective was the safety of sailors and their ability to anticipate and avoid dangerous weather.

FitzRoy's work led to the systematic collection of weather data, and to the systematic understanding of the association of storms with pressure and temperature changes, and of the large-scale flow of air and its weather implications.  And he considered the nature of the sorts of data that, in the 1800s, could be collected.  He developed some basic forecasting rules based on these sorts of data, and understood the importance of maps plotting similar observations over large areas and of what happened after the time of the map, and of global wind, weather, and pressure patterns, among other things.

Cover page of FitzRoy's book.  Google digitization from UCalifornia


In 1863, just four years after Darwin had made a big stir with his Origin of Species, then Rear Admiral FitzRoy wrote a popular book, such as things were at the time, called The Weather Book: A Manual of Practical Meteorology.  This is a very clearly written book that shows the state of this new science at the time.  It was an era of inductionism--the wholesale collection of lots and lots of data--based on the Enlightenment view that from data laws of Nature would emerge.  But FitzRoy, a meticulous and careful observer and thinker, also noticed something important.  As he wrote in The Weather Book:
Objects of known importance should take precedence of any speculative or merely curious observations. However true may be the principle of accumulating facts in order to deduce laws, a reasonable line of action, a sufficiently apparent cause for accumulation, is surely necessary, lest heaps of chaff or piles of unprofitable figures should overwhelm the grain-seeker, or bewilder any one in his search after undiscovered laws. 
Definite objects, a distinct course, should be kept in mind, lest we should take infinite pains in daily registration of facts scarcely less insignificant for future purposes than our nightly dreams.
Does this perhaps ring any relevant bells about what is going on today, in many areas of science, including a central one spawned by Mendel: genetics?  Maybe we should be paying a bit more heed, rather than ridicule, to another of our Victorian antecedents.

Yes, I'm taking impish advantage of a fortuitous quote that I came across, to make a point.  But it is a valid point in today's terms nonetheless, when anything one chooses to throw in the hopper is considered 'data' and applauded.

In fact, modern meteorology is a field in which huge amounts of data collection are, indeed, quite valuable.  But there are legitimate reasons: first, patterns repeat themselves and recognizing them greatly aids forecasting, both locally and on a continental scale; second, we have a prior theory, based on hydrodynamics, into which to fit these data; third, using the theory and the data from the past and present, we can reasonably accurately predict specific future states.  These are advantages and characteristics hardly shared with anything comparably rigorous in genetics and evolution, where, nonetheless, raw and expensive inductionism prevails today.