4th Thursday Update

August 27, 2015


The five stages of making up imaginary stages

Apparently Google referred somebody to my site the other day because they did a search on the question, "What is the last stage of diabetes before it kills?".

I found this a little surprising. I've never heard the question put quite that way.

A lot of people ask how bad diabetes has to get before it kills you; sometimes the question is worded in a way which suggests that diabetes can be ignored until it gets to that point. I think some people assume diabetes either kills you or does nothing to you, so if you keep it just under the fatal level you'll be fine. Perhaps they imagine diabetes as a plane in a tailspin: you can wait until just before the crash before there's any need to push the eject button; until then, it's okay to do nothing.

It's unusual to hear someone talking about "the last stage" of diabetes -- we don't usually talk about diabetes having stages, the way we talk about cancer having stages. However, I imagine we will be hearing more of this kind of thing in the near future, because the world seems to have fallen in love with "stages" as the best way to comprehend issues related to physical and mental health.

The idea of breaking down an illness or personal crisis into "stages" seems to have a powerful appeal to people. It makes them feel as if they can cope with a challenging situation, no matter how confused and powerless it might make them feel at first, because the territory has been charted by other people who have already been through it. We don't have to be in a panic because of our fear of the unknown. If we know what "stages" we're going to have to go through, and what's going to happen during each of those stages, at least we'll be prepared. It's a comfort to think that, when we get to Stage 3, we'll understand what Stage 3 is and we'll know what to do.

This sort of thing can be useful when each of the "stages" really does relate to some kind of well-understood phenomenon of the real world. (What first-time mother wouldn't want to know what to expect during the various stages of pregnancy, for example?) But the urge to break down an illness, a crisis, or a lifetime into "stages" (which occur on schedule, in the proper sequence, and are more or less the same for everybody), may be unrealistic. Often it is just an attempt to impose an illusion of order on the messy, unpredictable realities of life.

A good example of what I have in mind is the "five stages of grief" concept, which Elisabeth Kubler-Ross presented to the world in 1969. Initially she was writing about terminally ill people dealing with feelings about their own deaths, but somehow the main focus of the thing was transferred from dying people to their survivors: most people think the five stages refer to bereavement, not dying. Anyway, the concept became extremely popular and widespread, and it dominates our thinking about the grieving process -- even though, 35 years later, Kubler-Ross herself repudiated much of what people believed about it. In 2004, not long before her own death, she said that those famous "stages" (denial, anger, bargaining, depression, and acceptance) were really aspects of grieving, not stages of grieving; they didn't occur in a fixed order, they often overlapped in time, and not everyone experienced all of them.

Her partial recantation came too late; the five stages are so firmly established in popular culture, and especially in the thinking of psychologists and grief counselors, that any bereaved person who goes in for counseling is in danger of being force-fit into one of the five stages, even if they themselves don't think the categorization is accurate or is justified by anything they said. The "anger" stage is especially tricky, because if you're told you're in the "anger" stage and you protest that you're not, the argument itself will eventually make you angry enough for your counselor to feel smugly confident that he called it right. (Regarding "anger", it's worth noting that Kubler-Ross eventually admitted she'd had a lifelong struggle with an unresolved anger issue from her youth. Well, she was hardly the first psychiatrist to see one of her own idiosyncrasies as a universal aspect of human nature!)

The five stages concept isn't entirely wrong (it points to a combination of feelings which many grieving people really do experience), but by defining these five aspects of grief as "stages", it creates the false expectation that each stage is time-limited and will conclude on schedule. Grieving people are left to blame themselves for being "stuck" in one of the stages because it happens to describe a feeling that persists for them. I know a widower who convinced himself he had made great strides and had progressed through all the stages, then felt like a weakling and a failure when he realized he wasn't anywhere near done. Surely the "five stages" concept, at least as most people understand it, is of dubious value if it often makes bereaved people feel worse because it presents as universal (or at least "normal") a pattern which they don't fit.

The "stages" concept in general -- the urge to categorize the aspects of any complex situation as a sequence of distinct phases -- often borders on the purely fictional. It is usually an example of what's called "reification". To treat a purely abstract concept as if it were an actual thing existing in the world is to "reify" it -- literally to make it real, or more honestly to treat it as if it were real.

A notorious example of reification is "IQ". It began as a score on a screening test for schoolchildren with learning difficulties -- and, as originally defined (mental age divided by chronological age), it was meaningless for adults. It became reified, and is now popularly assumed to measure a real feature of the brain which somehow limits the intellectual capacity of an individual -- forever. There's no getting rid of this dangerous concept now: once an abstract idea becomes reified and people become convinced that it "exists" in the real world, they never question it again. If someone asks you what your IQ is, and you say "I don't know -- I've never had it tested", your use of the word "it" is enough to show that you assume an IQ is a real thing and that you have one of those things, even if "it" has not been evaluated yet. All of that is probably bilge, but few people question it.

People are similarly uncritical of Freud's concept of the mind consisting of a separate id, ego, and superego. Of course we have feelings for which each of those entities would be the appropriate origin -- but does it mean anything to say those entities "exist", just because our feelings exist? Aren't they simply emotional categories, which have been given names as if they were real objects? Neurologists haven't found the id in brain scans, you know.

Freud wasn't the only one ever to propose dividing the self into separate entities with separate functions; the "chakra" tradition from India is similar (and just as impossible to verify). To say that carnal desires arise from the id is not different from saying they arise from the "Muladhara" chakra -- or, if there's a difference, I don't see it. I don't think Freud had any better evidence for his multiple-entity model of the mind than the Indian mystics had for theirs; I think he just did a better selling job. Whether you believe in the id or the Muladhara, you're allowing somebody to reify an abstract concept into existence, and agreeing to regard it as a living reality.

The stages of grief seem to me to be just that type of thing: an abstract concept which we have agreed to regard as a living reality, even if it requires us to edit reality to make room for it. For example, if someone says they never experienced a "denial" stage, we assume they are in denial about their denial; after all, denial is the first of the stages, so they must have gone through the denial stage even if they're lying about it now! But there are indications that it's a serious burden on grieving people that their counselors won't listen to them about how they feel, if their feelings don't match the five-stage program. Maybe we should do grieving people the courtesy of listening to what they say they're actually going through.

Regarding the course of a chronic illness, it is sometimes a valid and useful thing to talk about the "stages" of the illness and the challenges patients should expect to face during those stages. Cancer tends to be that kind of condition.

Diabetes, however, does not tend to be that kind of condition. I doubt it is useful to think of diabetes as having "stages" that are different in kind from one another. I would advise diabetes patients against thinking in terms of what "stage" they are in and how hopeless their situation therefore is. Most people seem to think that pre-diabetes is a doctor's way of saying "no big deal", while diabetes is a doctor's way of saying "you're doomed". Neither interpretation is valid.

To ask "what is the last stage of diabetes before it kills" is like asking what the last stage of rust is before it breaks an iron chain that's been out in the weather too long. Uncontrolled diabetes, like rust, does not occur in distinct stages; it is a slow, gradual accumulation of damage. There is no predicting when the damage will reach a point of catastrophic failure -- and there is no use pretending everything is going fine until that point is reached.

There are no natural "stages" in such a gradual, cumulative process. Even the supposed transitions from non-diabetic to pre-diabetic to just-plain-diabetic are abstract, artificial "events" -- they are defined not by any qualitative difference between people in those categories, but by arbitrary numerical thresholds. Those thresholds could be changed (and indeed have been) simply because the health-care industry decides to change them. A fasting glucose result of 126 mg/dl is now considered diagnostic of diabetes, but the number used to be 140; the goalpost was moved once, and might be moved again. Some people think it should be moved again, on the grounds that 126 is still too high. The last time the goalpost was moved, a lot of people "became" diabetic overnight, but their health didn't change; the rulebook did. The same thing will happen if the diagnostic threshold is reduced again.
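Just to make the arbitrariness concrete, here is the diagnostic "rulebook" written out as a few lines of code. The 126 and 140 mg/dl figures are the ones discussed above; the 100 mg/dl pre-diabetes cutoff is the currently used fasting-glucose figure, and the function itself is purely an illustration, not medical software.

```python
# The diagnostic "stages" of diabetes are just numeric thresholds applied to
# one lab value. Cutoffs in mg/dl: 126 is the current fasting-glucose cutoff
# for diabetes (it used to be 140); 100 is the currently used pre-diabetes
# cutoff. Purely an illustration.

def classify(fasting_glucose_mgdl, diabetes_cutoff=126):
    """Return a diagnostic label for a fasting glucose reading."""
    if fasting_glucose_mgdl >= diabetes_cutoff:
        return "diabetes"
    elif fasting_glucose_mgdl >= 100:
        return "pre-diabetes"
    else:
        return "normal"

# The same person, the same blood -- a different rulebook:
print(classify(130))                       # "diabetes" under today's cutoff
print(classify(130, diabetes_cutoff=140))  # "pre-diabetes" under the old one
```

Change one number in the rulebook and thousands of people switch categories overnight, without their bodies changing at all.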

If it is misleading to speak of diabetes having "stages", it is also (to a degree) misleading to speak of diabetes as something that kills. In theory diabetes can be a direct cause of death, but that's not what usually happens. Abnormally high blood sugar, if it continues over a long period, results in a slow accumulation of damage to tissues throughout the body. A lot of this damage is the result of "glycation", the process by which protein molecules become dysfunctional because they are gummed up by a sugary coating. This process injures nerves (leading not only to neuropathic pain but to a dangerous loss of sensation), and it also injures blood vessels (leading to circulatory problems, eye problems, kidney problems, coronary heart disease, and strokes).

When a diabetes patient dies, the death certificate usually lists a heart attack as the direct cause, not "diabetes" (even though the heart attack probably could have been prevented, or at least delayed for years, if the patient's diabetes and general health had been better maintained). It is important to keep in mind that the "progression" of diabetes is the cumulative effect of years of poor glycemic control -- it is not an unavoidable journey through a set of "stages" we all must endure.

You don't have to let yourself get that rusty. There are ways to combat rust. There are things you can do about this situation, honest! But you won't do them, if you let yourself get too sold on the idea that diabetes has "stages" and you can't avoid going through them, just like everybody else.


Another take...

Zach Weiner thought of another way to explain the famous five stages of grief -- probably more entertainingly than I was able to do it...


3rd Thursday Update

August 20, 2015

Sorry about the computer mishap which caused my August blogs to depart for a galaxy far, far away. It couldn't be helped. Or anyway, I couldn't help it. I hope I do better this week!


Journey to the End of the Sentence!

Keeping an eye on the science news as it pertains to diabetes requires a little bit of patience in terms of deciphering technical language. Most people, I imagine, look at a sentence such as this one and immediately give up on making sense of it:

"Our data indicate that the FTO allele associated with obesity represses mitochondrial thermogenesis in adipocyte precursor cells in a tissue-autonomous manner."

Let us go on a thrilling and adventurous journey through this sentence, and see if its many challenges can be overcome! (Spoiler alert: it will turn out to mean something.)

It might appear to you that, in saying "Our data indicate that the FTO allele associated with obesity represses mitochondrial thermogenesis in adipocyte precursor cells in a tissue-autonomous manner," Dr Claussnitzer and colleagues are indulging in language so hopelessly obscure that there is no possibility of the average citizen figuring out what they're talking about. In years past that might have been true, but search engines have made it reasonably quick and easy to find out what technical terms mean, and even to figure out how they relate to one another within a sentence. Believe it or not, you too can dive into a sentence like that, work your way to the end of it, and emerge with a useful understanding of what is being reported. For the sake of encouraging you, I will take you on a guided tour of the sentence quoted above, to show you how much can be learned from it, even though it seems to be pure gibberish. The terms you probably need help with are:

allele (and FTO allele)
mitochondrial thermogenesis
adipocyte precursor cells
tissue-autonomous

I did a little internet-searching on those terms, and on the related terms that came up in connection with them, and I'm ready to share what I found out.

Let's take this much of the sentence to start with: "Our data indicate that the FTO allele..." What is an FTO allele, you ask? Well, it's easier to start with the word allele by itself. An allele (short for "allelomorph", meaning "other form") is any of the alternative forms of a particular gene. Like a sweatshirt which comes in various colors, a gene comes in various alleles (with tiny genetic differences between them).

A gene is a specific piece of code, located at a specific position along the length of a chromosome (a chromosome is a very long strand of DNA, coiled up tightly to fit it within the cell nucleus). Because humans have paired chromosomes, they inherit one copy of a given gene from each parent, and the two copies may be different variants of that gene -- whatever versions of that gene you inherit are called alleles of that gene. When you hear about inheriting a gene that causes a disease, "gene" in that case really means an allele of a gene -- a gene which wouldn't cause a problem, if you had been lucky enough to inherit a different allele of it. All humans get the same list of genes, but each of us gets a different combination of alleles for those genes. Strictly speaking, genes don't vary from person to person -- alleles do.

Okay, so what is an FTO allele? Well, FTO is the name of a particular gene, and an FTO allele is a particular variant of the FTO gene.

The human species has more than 20,000 genes, and each one of them has to be called something. Genes are identified by short acronyms, usually of three or more letters. Although the letters do stand for something, they often stand for something silly -- an arbitrary phrase which geneticists used as a nickname for a gene when it was newly discovered and its function was not well understood. The gene known as FTO has a particularly silly origin. It was discovered during investigation of a mutation in mice called Ft (for "fused toes"). The genes discovered during this research on Ft were given nicknames that included the letters FT. One of the genes was unusually large, so it was nicknamed "Fatso" (abbreviated FTO). Apparently all this happened before anyone understood that this gene regulated body fat and that one allele of the gene was associated with obesity. In other words, the apparent mean-spiritedness of the designation was a result of coincidence; geneticists were saying the gene itself was fat, not that the people carrying the wrong allele of it were fat.

Okay, back to the sentence: "...the FTO allele associated with obesity represses mitochondrial thermogenesis..." Uh-oh, sounds like trouble. The allele of the FTO gene that is associated with obesity represses mitochondrial thermogenesis! The mitochondria are tiny structures within our cells which process chemical energy -- they are sometimes referred to as cellular "power plants". One thing mitochondria are good at is burning chemical energy to generate body heat -- a process called thermogenesis. But it seems that this process is repressed, if you have the wrong allele of the FTO gene!

Where exactly is this process repressed? In "...adipocyte precursor cells". An adipocyte is a fat cell (body fat is sometimes called "adipose tissue"). Precursor cells are stem cells that have developed to the point where they are committed to becoming a particular type of cell; an adipocyte precursor cell is a stem cell that develops into a fat cell. And having the wrong allele of the FTO gene causes the mitochondria in those cells not to burn chemical energy to generate body heat. And why does that matter? Because, if a fat cell doesn't burn chemical fuel to generate body heat, it turns the chemical energy into fat and stores it.

Back to the sentence "...in a tissue-autonomous manner". This means that a body tissue is more or less acting on its own initiative, rather than being regulated by hormones or other control mechanisms of the body. (The worst form of tissue-autonomous behavior is cancer: a tumor is a tissue that is growing independently of the body's normal mechanisms for regulating growth.)

So, we've already reached the end of the sentence, which can be summarized as follows: some people carry a variant of the FTO gene (a variant which is associated with obesity); in these people, fat cells develop in such a way that they ignore regulatory signals which should make them burn fat -- resulting in more and more fat being stored instead of used.

There -- that wasn't so hard, was it?

That daunting sentence comes from a report on research from MIT which uncovered the basis (or one possible basis) of genetically-driven obesity. The reason people are excited about this, of course, is that it gives medical researchers a genetic "target" for drugs to combat obesity. The researchers have already found that they can counteract the effect of the obesity-related FTO allele within cells. Of course, what works in a Petri dish doesn't always work within a living human, but it's a start.


Seeing what we expect to see

During the recent NASA flyby of Pluto, some news outlets reminded us of the "controversy" about Pluto losing its status as a planet. It's a ridiculous, unimportant story, yet people seem to think it matters. I'm going to talk about it because I think it illustrates a larger point about science -- including health science.

What happened was that astronomers decided Pluto had been incorrectly classified as a planet when it was discovered in 1930, because it was then assumed to be a much larger object than it actually is. The word planet does have a scientific definition, and Pluto doesn't meet all the criteria of that definition (it isn't big enough to have achieved "gravitational dominance" -- it hasn't been able to clear its orbit of debris). Pluto is not only smaller than the solar system's other planets, it's also smaller than some of the solar system's moons. So, it was placed in the category of "dwarf planets" (and it isn't even the largest of those).

This development was reported by science journalists as if it were a scandal. People who had been taught in grade school to memorize the names of the planets, and to include Pluto among them, felt betrayed by this. They felt sorry for Pluto, and accused astronomers of treating the little planet shabbily. Journalists consistently said that astronomers had "demoted" Pluto (as if they had somehow given it a pay cut).

It was a pretty silly thing for people to get upset about. Not everything that orbits the sun is a planet. We don't call comets and asteroids planets. If Pluto has been found not to meet the definition of a planet, the obvious thing to do is to stop calling it a planet.

The interesting question to ask about Pluto is not why astronomers stopped calling it a planet, but why astronomers ever started calling it a planet in the first place. Yes, they thought it was much bigger when it was newly discovered -- but why did they think that?

When Clyde Tombaugh discovered Pluto in 1930, it was a tiny speck of light on a photograph of a star field. It was extremely faint -- much, much fainter than any other planet. If it were a large object, shouldn't it have been more conspicuous than that? Admittedly, it was over 4 billion miles away, and no telescope at that time could show it as a disk with a measurable width -- it just looked like a dimensionless point. But even at that distance it should have been brighter than it was, if it was as large as astronomers were claiming it to be.

In order to defend the claim that Pluto was a large planet, astronomers came up with a rationalization: Pluto was large, but extremely dark in color. It was coal-black, reflecting hardly any of the light that fell on it. That's why it was so hard to see, even with a large telescope. However, the other planets in the outer solar system were not black; they were made of frozen gases and they were highly reflective. Why should Pluto be black? It should have been obvious that astronomers were making excuses here, in order to cling to an improbable claim that Pluto was large.
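The "too faint to be big" argument is just inverse-square arithmetic. The sunlight an object reflects back to us scales with its albedo (the fraction of light it reflects) and its cross-section, and is diluted by the square of its distance from the sun and again by the square of its distance from the earth. As a rough sketch (the symbols here are just shorthand for the reasoning, not a formula from any particular paper):

```latex
F \;\propto\; \frac{a \, R^{2}}{d_{\odot}^{2} \, d_{\oplus}^{2}}
\qquad\Longrightarrow\qquad
R \;\propto\; \sqrt{F / a} \quad \text{(at fixed distances)}
```

So for a given measured faintness F at a known distance, the only way to keep the inferred radius R large is to push the albedo a down toward zero -- which is exactly the "coal-black Pluto" rationalization.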

Over the years, evidence was collected which gave rough indications of Pluto's size. Better telescopic observations seemed to show some width to Pluto (but this was misleading -- Pluto has a moon called Charon, and the two objects were blurring into one which looked wide because of the distance between them). Then astronomers made careful measurements of the time it took for a star to reappear after Pluto drifted in front of it. As more and more data was gathered, the estimated size of the planet grew smaller and smaller. (Isaac Asimov called it The Incredible Shrinking Planet.) We now know that it's far smaller than the earth (and earth is one of the smaller planets). Even Texas gives Pluto a run for its money.
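The occultation method mentioned above is simple arithmetic: the time the star stays hidden, multiplied by Pluto's apparent speed across the sky, gives the length of the chord the star traced behind the disk. The figures below are made-up round numbers for illustration, not the actual historical measurements:

```python
# Occultation timing: chord length = sky-plane speed x duration of disappearance.
# Both inputs are assumed round numbers, purely for illustration.

relative_speed_km_s = 20.0   # assumed apparent speed of Pluto relative to the star
occultation_seconds = 100.0  # assumed time the star stays hidden

chord_km = relative_speed_km_s * occultation_seconds
print(chord_km)  # 2000.0 -- the disk must be at least this wide along that chord
```

A single timing only gives one chord, but enough chords (or a near-miss, which sets an upper limit) pin down the diameter -- and each new measurement made Pluto smaller.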

So why had astronomers been so determined to think of Pluto as a large planet? Because they found Pluto when they were looking for a large planet, and they naturally assumed Pluto was it. Scientists, like anyone else, have a natural tendency to assume that whatever turns up during a search must be the thing they were looking for. Expectation dictates observation: we see whatever we imagined we were going to see. And when we look at evidence, we evaluate it as valid and important only if it confirms what we already think is true. The faintness of Pluto would have made astronomers think it was small, under different circumstances, but the actual circumstances were that they thought Pluto was large and they wanted to believe it was large -- therefore, evidence for Pluto being small should be devalued.

The reason astronomers had been looking for a large planet in the outer solar system is that they thought they'd detected irregularities in the orbit of Neptune, which could be explained if there was a large planet in the neighborhood exerting a gravitational influence. Neptune is a very large planet (much larger than the earth), so if there was a planet out there ("Planet X") capable of disturbing Neptune's orbit, it would have to be a big planet, too.

So, they went looking for a big planet beyond Neptune, and Pluto was what they found, therefore Pluto must be large. The logic is not impeccable, but they didn't stop to think about that. (Scientists are human, too.)

If Pluto wasn't large enough to be disturbing Neptune's orbit, and therefore couldn't be Planet X, then where was the missing large planet? What exactly was disturbing Neptune's orbit?

Nothing was, as it turned out. The supposed irregularities of Neptune's orbit were an artifact of slightly inaccurate observatory data. Problems with the calibration of a telescope used to collect data on Neptune had resulted in an inaccurate estimate of Neptune's mass, and therefore in an incorrect calculation of what its orbit should be. If you plugged more modern and accurate data into the equations, Neptune's orbit was not disturbed; it had no irregularities which required a Planet X to explain. Astronomers had been looking for a large planet that wasn't there. When they accidentally stumbled upon a dwarf planet that was there, they tried to convince themselves it was a large planet, because that's what they had expected to find.

To be fair, astronomers did concede that Pluto was small, once the evidence for that became too strong to be dismissed. Still, it continued to be called a planet for a long time. And when astronomers finally got around to saying it wasn't a planet, the general public thought this was outrageous. In science, it's not a bad thing to admit when you're wrong -- in fact, that sort of self-correcting operation is what distinguishes science from less intellectually rigorous pursuits, such as politics. But the public objects to this kind of self-correction for some reason, and thinks it's a scientific scandal whenever theories are revised to fit new data.

Where science has a direct impact on the general public (for example, science related to the question of what a "healthy diet" is), it can be extremely difficult for scientists to revise their theories to fit new data. Medical researchers, and public health officials, are extremely reluctant to revise any recommendations they have made to the public about healthy living. They feel that they've gone out on a limb with their clinical guidelines and recommendations, and they can't be seen as flip-flopping now. (It's appropriate that the political world, not the scientific world, gave us the derogatory term "flip-flop" for any change of position, however reasonable it might be.) As a result, when evidence starts piling up which casts doubt on a broadly-accepted health recommendation, medical associations and public health officials tend to dismiss the evidence. (They already committed themselves to a position on the issue, so it would be too embarrassing to re-examine the matter in light of new evidence.)

An example of this phenomenon has been the decades-long assumption that cardiovascular disease is driven largely by saturated fat in the diet. (Saturated fats are primarily animal fats -- the kind of fats which, unlike unsaturated vegetable oils, tend to be solid rather than liquid at room temperature.) The public has been told for years (actually, for generations) that you should carefully restrict saturated fat and cholesterol in the diet, if you want to avoid heart disease. Stop eating bacon and eggs for breakfast -- have oatmeal instead!

This idea became accepted through the vigorous advocacy of one highly influential physiologist, Ancel Keys. His evidence for the idea that saturated fat causes heart disease consisted of research comparing the diets and heart disease rates in different countries, and finding that the countries with high rates of saturated fat consumption also had high rates of heart disease.

One tiny flaw in Keys's approach was that there are many differences between countries other than their differing consumption of saturated fat, so there could be many possible reasons besides diet why one country had more heart disease than another. Also, Keys had data from 21 countries, not just the 7 he became famous for studying. He threw out the ones that didn't support the correlation he was looking for, between saturated fat and heart disease.

Like the astronomers who expected Pluto to be a large planet, and disregarded any evidence that it wasn't, Keys expected saturated fat consumption to correlate with heart disease, and disregarded any evidence that it didn't.

Various recent studies have found that saturated fat does not correlate with cardiovascular disease. This study found that "saturated fats are not associated with an increased risk of death, heart disease, stroke, or Type 2 diabetes". The study mentions that its finding confirms the findings of "five previous systematic reviews". The study did find that trans fats (artificially hydrogenated oils) are clearly correlated with cardiovascular disease, but saturated fats (the more widely-feared dietary bogey-man) are not.

However, these studies are having no impact on public health policy. Medical organizations are not changing their advice to the public. They are too committed to this saturated-fat-will-kill-you concept to abandon it. They don't want to admit they've been getting this wrong for decades.

And I know why: they saw what happened to astronomers when they admitted they were wrong about Pluto.
