Aletho News


Pre-traumatic stress syndrome: Climate trauma survival tips

By Judith Curry | Climate Etc. | October 29, 2014

Climate depression is real. Just ask a scientist. – Madeleine Thomas

Wow. Here I was, hard at work on my long-promised post on clouds, when I spotted this article on Twitter. Much easier to write about this one than about clouds.

At Grist, Madeleine Thomas has penned an article, “Climate depression is real. Just ask a scientist.” Excerpts:

Two years ago, Camille Parmesan, a professor at Plymouth University and the University of Texas at Austin, became so “professionally depressed” that she questioned abandoning her research in climate change entirely.

“I felt like here was this huge signal I was finding and no one was paying attention to it,” Parmesan says. “I was really thinking, ‘Why am I doing this?’” She ultimately packed up her life here in the States and moved to her husband’s native United Kingdom.

“In the U.S., [climate change] isn’t well-supported by the funding system, and when I give public talks in the U.S., I have to devote the first half of the talk to [the topic] that climate change is really happening,” says Parmesan, now a professor at Plymouth University in England.

From depression to substance abuse to suicide and post-traumatic stress disorder, growing bodies of research in the relatively new field of psychology of global warming suggest that climate change will take a pretty heavy toll on the human psyche as storms become more destructive and droughts more prolonged.

“I don’t know of a single scientist that’s not having an emotional reaction to what is being lost,” Parmesan is quoted saying in the National Wildlife Federation’s 2012 report, “The Psychological Effects of Global Warming on the United States: And Why the U.S. Mental Health Care System is Not Adequately Prepared.”

Lise Van Susteren, a forensic psychiatrist based in Washington, D.C. — and co-author of the National Wildlife Federation’s report — calls this emotional reaction “pre-traumatic stress disorder,” a term she coined to describe the mental anguish that results from preparing for the worst, before it actually happens.

What’s even more deflating for a climate scientist is when sounding the alarm on climatic catastrophes seems to fall on deaf ears. “How would that make you feel? You take this information to someone and they say they don’t believe you, as if it’s a question of beliefs,” says Jeffrey Kiehl, senior scientist for climate change research at the National Center for Atmospheric Research in Boulder. “I’m not talking about religion here, I’m talking about facts.”

“I could imagine that if scientists start to talk about how they’re feeling about the issue and how emotional they’re feeling about the issue, those who are critical about climate change would seize that information and use it in any way they could to say that we should reject their science,” Kiehl says.

It’s only natural then that many climate scientists and activists often feel an extreme pressure to keep their emotions in check, even when out of the spotlight.

“You don’t just start talking about unbelievably fast sea-level rise at a cocktail party at a friend’s house,” [climate activist Mike] Tidwell says. “So having to deny the emotional need to talk about what’s on your mind all the time … those are some of the burdens that climate-aware scientists and activists have to endure. But nobody talks about how it makes them feel personally.”

So how does a climate scientist handle the stress? Van Susteren offers several “climate trauma survival tips” for those in the field. Meditation and therapy are two, as is taking particular care to reinforce boundaries between work and one’s personal life. But she also says being honest is just as important. “[Don’t] believe that you are invulnerable,” she writes. “In fact, admitting what you are going through makes you more resilient.”

And a dose of honesty may be more than just therapeutic. Some real talk about how we’re all screwed may be just what the climate movement needs. “Forgive my language here, but if scientists are looking for a clearer language to express the urgency of climate change, there’s no clearer word that expresses that urgency than FUCK,” [Grist’s Brentin] Mock writes.

Perhaps it’s time for those deeply involved in climate science to come forward about the emotional struggle, or at the very least, for those in mental health research and support to start exploring climate change psychology with more fervor.

“There’s a taboo talking about it,” [researcher Renee] Lertzman says, adding that the tight-lipped culture of the scientific community can be difficult to bridge. “The field of the psychology of climate change is still very, very young … I believe there are profound and not well-recognized or understood psychological implications of what I would call being a frontliner. There needs to be a lot more attention given to frontliners and where they’re given support.”

JC reflections

Oh my, where to start with this one. Let’s try this:

I feel your pain. Circa 2007 I felt the same way you did, and ran around turning off lights and unplugging things, feeling really uncomfortable about the carbon footprint of myself and my surroundings. But then I woke up as a scientist and realized that my belief in dangerous anthropogenic climate change was a second-order belief – based on the IPCC consensus. That is, I believed in the consensus without having done a real detailed assessment of my own. Then, when Climategate triggered me to closely examine everything, notably the IPCC’s attribution argument, I realized that the fingerprints were ‘muddy’, the climate models are running too hot, the forcing data is uncertain, no account is made for multidecadal and longer internal variability, and there is no explanation for the warming from 1910 to 1940, the cooling from 1940 to 1976, or the hiatus since 1998. Once you raise questions about 20th-century attribution, your angst about impacts that you think are attributable to AGW becomes much less justified.

In terms of tips, try reading some literature on the history, philosophy, and sociology of science – you will become more humble as a scientist and less likely to believe your own hype. Read Richard Feynman. Hang out at Climate Etc. Listen seriously to a serious skeptic.

If these strategies don’t work, try learning about aberrant psychologies, such as the God complex and paranoia, and look in the mirror (there are probably others, but I don’t know that much psychology myself).

And also inform yourself about psychological hardiness (something I learned about during my days at the University of Chicago, hanging out with grad students in Salvatore Maddi’s group). Excerpt from Wikipedia:

The coping style most commonly associated with hardiness is that of transformational coping, an optimistic style of coping that transforms stressful events into less stressful ones. At the cognitive level this involves setting the event into a broader perspective in which it does not seem so terrible after all. At the level of action, individuals high in hardiness are believed to react to stressful events by increasing their interaction with them, trying to turn them into an advantage and opportunity for growth, and in the process achieve some greater understanding.

The ‘pre-traumatic stress’ thing clicked a link in my mind to my old U. of Chicago pal Colonel Paul Bartone, a military psychologist and a member of the hardiness group. The following paper seems relevant: A Model for Soldier Psychological Adaptation in Peacekeeping Operations. I think these concepts are relevant to what is going on with Parmesan et al. Seems like skeptics are more hardy?

The psychology of all this is probably pretty interesting, and worthy of more investigation. But Jeff Kiehl is right – whining scientists aren’t going to help either the science or their ’cause.’


Number of UK Afghan war veterans seeking mental help doubles in a year

RT | May 12, 2014

There has been a “significant increase” in the number of UK Afghanistan veterans seeking treatment for mental disorders, a charity has said. The number is likely to rise as the British military prepares to withdraw from the country this year.

The charity Combat Stress has released new statistics to the British press on the number of UK war veterans seeking help for mental trauma. It documents a 57 percent rise in 2013 in referrals of veterans who served in the Afghanistan conflict.

There were 358 cases last year, compared with 228 referrals for Afghanistan-related mental trauma in 2012. At the moment, the charity is supporting over 660 Afghanistan veterans, but the organization expects the number to rise with the full withdrawal of US-led NATO troops scheduled for the end of this year.

According to the charity’s research, most veterans do not seek help for mental trauma until more than a decade after serving in the army. In the case of Afghanistan veterans, however, the charity has found the average time lag has fallen to as little as 18 months.

Commodore Andrew Cameron, the chief executive of Combat Stress, told The Guardian newspaper that mental disorders take time to present themselves, and as such the UK should be ready for a dramatic increase of cases off the back of the 13-year Afghan conflict.

“These statistics show that, although the Iraq war ended in 2011 and troops are withdrawing from Afghanistan later this year, a significant number of veterans who serve in the armed forces continue to relive the horrors they experienced on the front line or during their time in the armed forces,” Cameron said.

Combat Stress estimates that a large proportion of the 42,000 people who served in conflicts in Afghanistan and Iraq may develop some form of mental disorder in the coming decade. Conditions range from post-traumatic stress disorder to depression, and the veterans’ struggle against these disorders can “tear families apart,” Cameron said.

The charity says that even now it is still taking on cases from veterans of the Falklands War (1982) and the Gulf war (1990-1991).

According to BBC figures, at least 453 members of the UK Armed Forces have been killed in Afghanistan since the US-led NATO invasion in 2001. The last of the alliance forces stationed in the country are set to be withdrawn at the end of this year.

However, Washington is pushing for a security pact to be signed by the Afghan government that will allow for a contingent of troops to remain in Afghanistan to aid in the security effort after alliance troops pull out.

Outgoing Afghan President Hamid Karzai has refused to sign the pact, but presidential elections were held this year in April and both the frontrunners have said they are prepared to put pen to paper on the deal.


The Wholesale Sedation of America’s Youth

By Andrew M. Weiss | Skeptical Inquirer | May 6, 2009

In 1950, approximately 7,500 children in the United States were diagnosed with mental disorders. That number is at least eight million today, and most receive some form of medication.

Is this progress or child abuse?

In the winter of 2000, the Journal of the American Medical Association published the results of a study indicating that 200,000 two- to four-year-olds had been prescribed Ritalin for an “attention disorder” from 1991 to 1995. Judging by the response, the image of hundreds of thousands of mothers grinding up stimulants to put into the sippy cups of their preschoolers was apparently not a pretty one. Most national magazines and newspapers covered the story; some even expressed dismay or outrage at this exacerbation of what already seemed like a juggernaut of hyper-medicalizing childhood. The public reaction, however, was tame; the medical community, after a moment’s pause, continued unfazed. Today, the total toddler count is well past one million, and influential psychiatrists have insisted that mental health prescriptions are appropriate for children as young as twelve months. For the pharmaceutical companies, this is progress.

In 1995, 2,357,833 children were diagnosed with ADHD (Woodwell 1997)—twice the number diagnosed in 1990. By 1999, 3.4 percent of all American children had received a stimulant prescription for an attention disorder. Today, that number is closer to ten percent. Stimulants aren’t the only drugs being given out like candy to our children. A variety of other psychotropics like antidepressants, antipsychotics, and sedatives are finding their way into babies’ medicine cabinets in large numbers. In fact, the worldwide market for these drugs is growing at a rate of ten percent a year, with $20.7 billion in sales of antipsychotics alone in 2007 (IMS Health 2008).

While the sheer volume of psychotropics being prescribed for children might, in and of itself, produce alarm, there has not been a substantial backlash against drug use in large part because of the widespread perception that “medically authorized” drugs must be safe. Yet, there is considerable evidence that psychoactive drugs do not take second place to other controlled pharmaceuticals in carrying grave and substantial risks. All classes of psychoactive drugs are associated with patient deaths, and each produces serious side effects, some of which are life-threatening.

In 2005, researchers analyzed data from 250,000 patients in the Netherlands and concluded that “we can be reasonably sure that antipsychotics are associated with something like a threefold increase in sudden cardiac death, and perhaps that older antipsychotics may be worse” (Straus et al. 2004). In 2007, the FDA chose to beef up its black box warning (reserved for substances that represent the most serious danger to the public) against antidepressants, concluding, “the trend across age groups toward an association between antidepressants and suicidality . . . was convincing, particularly when superimposed on earlier analyses of data on adolescents from randomized, controlled trials” (Friedman and Leon 2007). Antidepressants have been banned for use with children in the UK since 2003. According to a confidential FDA report, prolonged administration of amphetamines (the standard treatment for ADD and ADHD) “may lead to drug dependence and must be avoided.” They further reported that “misuse of amphetamine may cause sudden death and serious cardiovascular adverse events” (Food and Drug Administration 2005). The risk of fatal toxicity from lithium carbonate, a not uncommon treatment for bipolar disorder, has been well documented since the 1950s. Incidents of fatal seizures from sedative-hypnotics, especially when mixed with alcohol, have been recorded since the 1920s.

Psychotropics carry nonfatal risks as well. Physical dependence and severe withdrawal symptoms are associated with virtually all psychoactive drugs. Psychological addiction is axiomatic. Concomitant side effects range from unpleasant to devastating, including insulin resistance, narcolepsy, tardive dyskinesia (a movement disorder affecting 15–20 percent of antipsychotic patients that produces uncontrolled facial movements and sometimes jerking or twisting movements of other body parts), agranulocytosis (a life-threatening reduction in white blood cells), accelerated appetite, vomiting, allergic reactions, uncontrolled blinking, slurred speech, diabetes, balance irregularities, irregular heartbeat, chest pain, sleep disorders, fever, and severe headaches. The attempt to control these side effects has resulted in many children taking as many as eight additional drugs every day, but in many cases, this has only compounded the problem. Each “helper” drug produces unwanted side effects of its own.

The child drug market has also spawned a vigorous black market in high schools and colleges, particularly for stimulants. Students have learned to fake the symptoms of ADD in order to obtain amphetamine prescriptions that are subsequently sold to fellow students. Such “shopping” for prescription drugs has even spawned its own term: the practice is commonly called “pharming.” A 2005 report from the Partnership for a Drug Free America, based on a survey of more than 7,300 teenagers, found that one in ten teenagers, or 2.3 million young people, had tried prescription stimulants without a doctor’s order, and 29 percent of those surveyed said they had close friends who have abused prescription stimulants.

In a larger sense, the whole undertaking has had the disturbing effect of making drug use an accepted part of childhood. Few cultures anywhere on earth and anytime in the past have been so willing to provide stimulants and sedative-hypnotics to their offspring, especially at such tender ages. An entire generation of young people has been brought up to believe that drug-seeking behavior is both rational and respectable and that most psychological problems have a pharmacological solution. With the ubiquity of psychotropics, children now have the means, opportunity, example, and encouragement to develop a lifelong habit of self-medicating.

Common population estimates include at least eight million children, ages two to eighteen, receiving prescriptions for ADD, ADHD, bipolar disorder, autism, simple depression, schizophrenia, and the dozens of other disorders now included in psychiatric classification manuals. Yet sixty years ago, it was virtually impossible for a child to be considered mentally ill. The first diagnostic manual published by American psychiatrists in 1952, DSM-I, included among its 106 diagnoses only one for a child: Adjustment Reaction of Childhood/Adolescence. The other 105 diagnoses were specifically for adults. The number of children actually diagnosed with a mental disorder in the early 1950s would hardly move today’s needle. There were, at most, 7,500 children in various settings who were believed to be mentally ill at that time, and most of these had explicit neurological symptoms.

Of course, if there really are one thousand times as many kids with authentic mental disorders now as there were fifty years ago, then the explosion in drug prescriptions in the years since only indicates an appropriate medical response to a newly recognized pandemic, but there are other possible explanations for this meteoric rise. The last fifty years have seen significant social changes, many with a profound effect on children. Burgeoning birth rates, the decline of the extended family, widespread divorce, changing sexual and social mores, households with two working parents—it is fair to say that the whole fabric of life took on new dimensions in the last half century. The legal drug culture, too, became an omnipresent adjunct to daily existence. Stimulants, analgesics, sedatives, decongestants, penicillins, statins, diuretics, antibiotics, and a host of others soon found their way into every bathroom cabinet, while children became frequent visitors to the family physician for drugs and vaccines that we now believe are vital to our health and happiness. There is also the looming motive of money. The New York Times reported in 2005 that physicians who had received substantial payments from pharmaceutical companies were five times more likely to prescribe a drug regimen to a child than those who had refused such payments.

So other factors may well have contributed to the upsurge in psychiatric diagnoses over the past fifty years. But even if the increase reflects an authentic epidemic of mental health problems in our children, it is not certain that medication has ever been the right way to handle it. The medical “disease” model is one approach to understanding these behaviors, but there are others, including a hastily discarded psychodynamic model that had a good record of effective symptom relief. Alternative, less invasive treatments, too, like nutritional interventions, early intervention, and teacher and parent training programs, were found to be at least as effective as medication in the long-term reduction of a variety of symptoms of ADHD (The MTA Cooperative Group 1999).

Nevertheless, the medical-pharmaceutical alliance has largely shrugged off other approaches, scoffed at concerns about conflicts of interest, and continued to medicate children in ever-increasing numbers. With the proportion of diagnosed kids growing every month, it may be time to take another look at the practice and soberly reflect on whether we want to continue down this path. In that spirit, it is not unreasonable to ask whether this exponential expansion in medicating children has another explanation altogether. What if children are the same as they always were? After all, virtually every symptom now thought of as diagnostic was once an aspect of temperament or character. We may not have liked it when a child was sluggish, hyperactive, moody, fragile, or pestering, but we didn’t ask his parents to medicate him with powerful chemicals either. What if there is no such thing as mental illness in children (except the small, chronic, often neurological minority we once recognized)? What if it is only our perception of childhood that has changed? To answer this, we must look at our history and at our nature.

The human inclination to use psychoactive substances predates civilization. Alcohol has been found in late Stone Age jugs; beer may have been fermented before the invention of bread. Nicotine metabolites have been found in ancient human remains and in pipes in the Near East and Africa. Knowledge of Hul Gil, the “joy plant,” was passed from the Sumerians, in the fifth millennium b.c.e., to the Assyrians, then in serial order to the Babylonians, Egyptians, Greeks, Persians, Indians, then to the Portuguese who would introduce it to the Chinese, who grew it and traded it back to the Europeans. Hul Gil was the Sumerian name for the opium poppy. Before the Middle Ages, economies were established around opium, and wars were fought to protect avenues of supply.

With the modern science of chemistry in the nineteenth century, new synthetic substances were developed that shared many of the same desirable qualities as the more traditional sedatives and stimulants. The first modern drugs were barbiturates—a class of 2,500 sedative/hypnotics that were first synthesized in 1864. Barbiturates became very popular in the U.S. for depression and insomnia, especially after the temperance movement resulted in draconian anti-drug legislation (most notoriously Prohibition) just after World War I. But variety was limited and fears of death by convulsion and the Winthrop drug-scare kept barbiturates from more general distribution.

Stimulants, typically caffeine and nicotine, were already ubiquitous in the first half of the twentieth century, but more potent varieties would have to wait for the amphetamines, which came into use in the 1920s and 1930s as treatments for asthma, hay fever, and the common cold. In 1932, the Benzedrine Inhaler was introduced to the market and was a huge over-the-counter success. With the introduction of Dexedrine in the form of small, cheap pills, amphetamines were prescribed for depression, Parkinson’s disease, epilepsy, motion sickness, night-blindness, obesity, narcolepsy, impotence, apathy, and, of course, hyperactivity in children.

Amphetamines came into still wider use during World War II, when they were given out freely to GIs for fatigue. When the GIs returned home, they brought their appetite for stimulants to their family physicians. By 1962, Americans were ingesting the equivalent of forty-three ten-milligram doses of amphetamine per person annually (according to FDA manufacturer surveys).

Still, in the 1950s, the family physician’s involvement in furnishing psychoactive medications for the treatment of primarily psychological complaints was largely sub rosa. It became far more widespread and notorious in the 1960s. There were two reasons for this. First, a new, safer class of sedative-hypnotics, the benzodiazepines, including Librium and Valium, was an instant sensation, especially among housewives, who called them “mothers’ helpers.” Second, amphetamines had finally been approved for use with children (their use up to that point had been “off-label,” meaning that they were prescribed despite the lack of FDA authorization).

Pharmaceutical companies, meanwhile, became more aggressive in marketing their products following the tremendous success of amphetamines. Valium was marketed directly to physicians and indirectly through a public relations campaign that implied that benzodiazepines offered sedative/hypnotic benefits without the risk of addiction or death from drug interactions or suicide. Within fifteen years of its introduction, 2.3 billion Valium pills were being sold annually in the U.S. (Sample 2005).

So, family physicians became society’s instruments: the suppliers of choice for legal mood-altering drugs. But medical practitioners required scientific authority to protect their reputations, and the public required a justification for its drug-seeking behavior. The pharmaceutical companies were quick to offer a pseudoscientific conjecture that satisfied both. They argued that neurochemical transmitters, only recently identified, were in fact the long-sought mediators of mood and activity. Psychological complaints, consequently, were a function of an imbalance of these neural chemicals that could be corrected with stimulants and sedatives (and later antidepressants and antipsychotics). While the assertion was pure fantasy without a shred of evidence, so little was known about the brain’s true actions that the artifice was tamely accepted. This would later prove devastating when children became the targets of pharmaceutical expansion.

With Ritalin’s FDA approval for the treatment of hyperactivity in children, the same marketing techniques that had been so successful with other drugs were applied to the new stimulant. Pharmaceutical companies had a vested interest in the increase in sales; they spared no expense in convincing physicians to prescribe them. Cash payments, stock options, paid junkets, no-work consultancies, and other inducements encouraged physicians to relax their natural caution about medicating children. Parents also were targeted. For example, CIBA, the maker of Ritalin, made large direct payments to parents’ support groups like CHADD (Children and Adults with Attention Deficit/Hyperactivity Disorder) (The Merrow Report 1995). To increase the acceptance of stimulants, drug companies paid researchers to publish favorable articles on the effectiveness of stimulant treatments. They also endowed chairs and paid for the establishment of clinics in influential medical schools, particularly ones associated with universities of international reputation. By the mid-1970s, more than half a million children had already been medicated primarily for hyperactivity.

The brand of psychiatry that became increasingly popular in the 1980s and 1990s did not have its roots in notions of normal behavior or personality theory; it grew out of the concrete, atheoretical treatment style used in clinics and institutions for the profoundly disturbed. German psychiatrist Emil Kraepelin, not Freud, was the God of mental hospitals, and pharmaceuticals were the panacea. So the whole underlying notion of psychiatric treatment, diagnosis, and disease changed. Psychiatry, which had straddled psychology and medicine for a hundred years, abruptly abandoned psychology for a comfortable sinecure within its traditional parent discipline. The change was profound.

People seeking treatment were no longer clients; they were patients. Their complaints were no longer suggestive of a complex mental organization; they were symptoms of a disease. Patients were not active participants in a collaborative treatment; they were passive recipients of symptom-reducing substances. Mental disturbances were no longer caused by unique combinations of personality, character, disposition, and upbringing; they were attributed to pre-birth anomalies that caused vague chemical imbalances. Cures were no longer anticipated or sought; mental disorders were inherited illnesses, like birth defects, that could not be cured except by some future magic, genetic bullet. All that could be done was to treat symptoms chemically, and this was being done with astonishing ease and regularity.

In many ways, children are the ideal patients for drugs. By nature, they are often passive and compliant when told by a parent to take a pill. Children are also generally optimistic and less likely to balk at treatment than adults. Even if they are inclined to complain, the parent is a ready intermediary between the physician and the patient. Parents are willing to participate in the enforcement of treatments once they have justified them in their own minds and, unlike adults, many kids do not have the luxury of discontinuing an unpleasant medication. Children are additionally not aware of how they ought to feel. They adjust to the drugs’ effects as if they are natural and are more tolerant of side effects than adults. Pharmaceutical companies recognized these assets and soon were targeting new drugs specifically at children.

But third-party insurance providers balked at the surge in costs for treatment of previously unknown psychological syndromes, especially since unwanted drug effects were making some cases complicated and expensive. Medicine’s growing prosperity as the purveyor of treatments for mental disorders was threatened, and the industry’s response was predictable. Psychiatry found that it could meet insurance company requirements by simplifying diagnoses, reducing identification to the mere appearance of certain symptoms. By 1980, the profession had published all-new diagnostic standards (DSM-III).

Lost in the process was the fact that the redefined diagnoses (and a host of new additions) failed to meet minimal standards of falsifiability and differentiability. This meant that the diagnoses could never be disproved and that they could not be indisputably distinguished from one another. The new disorders were also defined as lists of symptoms from which a physician could check off a certain number of hits like a Chinese menu, which led to reification (treating a checklist of behaviors as though it named a real disease entity), an egregious scientific impropriety. Insurers, however, with their exceptions undermined and under pressure from parents and physicians, eventually withdrew their objections. From that moment on, the treatment of children with powerful psychotropic medications grew unchecked.

As new psychotropics became available, their uses were quickly extended to children despite, in many cases, indications that the drugs were intended for use with adults only. New antipsychotics, the atypicals, were synthesized and marketed beginning in the 1970s. Subsequently, a new class of antidepressants like Prozac and Zoloft was introduced. These drugs were added to the catalogue of childhood drug treatments with an astonishing casualness even as stimulant treatment for hyperactivity continued to burgeon.

In 1980, hyperactivity, which had been imprudently named “minimal brain dysfunction” in the 1960s, was renamed Attention Deficit Disorder in order to be more politic, but there was an unintended consequence of the move. Parents and teachers, familiar with the name but not always with the symptoms, frequently misidentified children who were shy, slow, or sad (introverted rather than inattentive) as suffering from ADD. Rather than correct the mistake, though, some enterprising physicians responded by prescribing the same drug for the opposite symptoms. This was justified on the grounds that stimulants, which were being offered because they slowed down hyperactive children, might very well have the predicted effect of speeding up underactive kids. In this way, a whole new population of children became eligible for medication. Later, the authors of DSM-III-R memorialized this practice by renaming ADD again, this time as ADHD, and redefining ADD as inattention. Psychiatry had reached a new level: it was now willing to invent an illness to justify a treatment. It would not be the last time this was done.

In the last twenty years, a new, more disturbing trend has become popular: the re-branding of legacy forms of mental disturbance as broad categories of childhood illness. Manic-depressive illness and infantile autism, two previously rare disorders, were redefined through this process as “spectrum” illnesses with loosened criteria and symptom lists that cover a wide range of previously normal behavior. With this slim justification in place, more than a million children have been treated with psychotropics for bipolar disorder and another 200,000 for autism. A recent article in this magazine, “The Bipolar Bamboozle” (Flora and Bobby 2008), illuminates how and why an illness that once occurred in two of every 100,000 Americans has been recast as an epidemic affecting millions.

To overwhelmed parents, drugs solve a whole host of ancillary problems. The relatively low cost (at least in out-of-pocket dollars) and the small commitment of time for drug treatments make them attractive to parents who are already stretched thin by work and home life. Those whose confidence is shaken by indications that their children are “out of control” or “unruly” or “disturbed” are soothed by the seeming inevitability of an inherited disease that is shared by so many others. Rather than blaming themselves for being poor home managers, guardians with insufficient skills, or neglectful caretakers, parents can find comfort in the thought that their child, through no fault of theirs, has succumbed to a modern and widely accepted scourge. A psychiatric diagnosis also works well as an authoritative response to demands made by teachers and school administrators to address their child’s “problems.”

Once a medical illness has been identified, all unwanted behavior becomes fruit of the same tree. Even the children themselves are often at first relieved that their asocial or antisocial impulses reflect an underlying disease and not some flaw in their characters or personalities.

Conclusions

In the last analysis, childhood has been thoroughly and effectively redefined. Character and temperament have been largely removed from the vocabulary of human personality. Virtually every single undesirable impulse of children has taken on pathological proportions and diagnostic significance. Yet, if the psychiatric community is wrong in its theories and hypotheses, then a generation of parents has been deluded while millions of children have been sentenced to a lifetime of ingesting powerful and dangerous drugs.

Considering the enormous benefits reaped by the medical community, it is no surprise that critics have argued that the whole enterprise is a cynical, reckless artifice crafted to unfairly enrich them. Even though this is undoubtedly not true, physicians and pharmaceutical companies must answer for the rush to medicate our most vulnerable citizens based on little evidence, a weak theoretical model, and an antiquated and repudiated philosophy. For its part, the scientific community must answer for its timidity in challenging treatments made in the absence of clinical observation and justified by research of insufficient rigor performed by professionals and institutions whose objectivity is clearly in question, because their own interests are materially entwined in their findings.

It should hardly be necessary to remind physicians that even if their diagnoses are real, they are still admonished by the ancient medical dictum primum non nocere, or “first, do no harm.” If we honor it with no other population, this ought to be our standard when dealing with children. Yet we have chosen the most invasive, destructive, and potentially lethal treatment imaginable while rejecting other options that show great promise of being at least as effective and far safer. But these other methods are more expensive, more complicated, and more time-consuming, and thus far, we have not proved willing to bear the cost. Instead, we have jumped at a discounted treatment, a soft-drink-machine cure: easy, cheap, fast, and putatively scientific. Sadly, the difference in price is now being paid by eight million children.

Mental illness is a fact of life, and it is naïve to imagine that there are not seriously disturbed children in every neighborhood and school. What is more, in the straitened economy of child rearing and education, medication may be the most efficient and cost effective treatment for some of these children. Nevertheless, to medicate not just the neediest, most complicated cases but one child in every ten, despite the availability of less destructive treatments and regardless of doubtful science, is a tragedy of epic proportions.

What we all have to fear, at long last, is not having been wrong but having done wrong. That will be judged in a court of a different sort. Instead of showing humility, we continue to feed drugs to our children with blithe indifference. Even when a child’s mind is truly disturbed (and our standards need to be revised drastically on this score), a treatment model that intends to chemically palliate and manage ought to be our last resort, not our first option. How many more children need to be sacrificed for us to see the harm in expediency, greed, and plain ignorance?

Andrew Weiss holds a PhD in school-clinical psychology from Hofstra University. He served on the faculty of Iona College and has been a senior school administrator in Chappaqua, New York. He has published a number of articles on technology in education. E-mail: anweiss [at] optonline.net.


Making a Killing: The Untold Story of Psychotropic Drugging

COTO Report

Psychotropic drugs. It’s the story of big money: drugs that fuel a $330 billion psychiatric industry, without a single cure.

The cost in human terms is even greater — these [legal] drugs now kill an estimated 42,000 people every year.

And the death count keeps rising. Containing more than 175 interviews with lawyers, mental health experts, the families of victims and the survivors themselves, this riveting documentary rips the mask off psychotropic drugging and exposes a brutal but well-entrenched money-making machine.

Before these drugs were introduced to the market, people who had these conditions would not have been given any drugs at all.

So it is the branding of a disease, and it is the branding of a drug for the treatment of a disease that did not exist before the industry made the disease. (Excerpt from cchr.org)


Rate of Patients in Psychiatric Hospitals Has Fallen to Level of 1850

By Matt Bewig | AllGov | December 28, 2012

Either 21st-century Americans are saner than ever before or we’re too sick as a society to properly care for the mentally ill among us, but the fact is that fewer of us are receiving mental health care in psychiatric hospitals. According to “No Room at the Inn: Trends and Consequences of Closing Public Psychiatric Hospitals,” a study by the Treatment Advocacy Center, per capita state psychiatric bed populations plunged in 2010 to 14 beds per 100,000 population, identical to the rate in 1850, when the movement to treat seriously mentally ill persons in hospitals began. The number peaked at 300 beds per 100,000 in 1950 and has been declining ever since.

Using data from the National Association of State Mental Health Program Directors (NASMHPD) Research Institute, Dr. E. Fuller Torrey and four co-authors show that just from 2005 to 2010, the number of state psychiatric beds decreased by 14%, from 50,509 to 43,318. Noting that states have continued to eliminate beds since 2010, the report concludes that “many states appear to be effectively terminating a public psychiatric treatment system that has existed for nearly two centuries. The system was originally created to protect both the patients and the public, and its termination is taking place with little regard for the consequences to either group.”

Given the lack of hospital care, many of the most severely mentally ill, especially those whose conditions make it difficult for them to conform to social norms or control behaviors, wind up in hospital emergency departments, jails and prisons, all of which suffer as a result.

• Hospital emergency rooms are overcrowded with acutely ill patients who wait days or weeks for a psych bed to open; many are released without treatment.

• In some communities, as many as two-thirds of the homeless population is mentally ill, leading to frequent encounters with law enforcement.

• Jails and prisons are increasingly filled with the mentally ill, with some facilities reporting that one-third or more of their inmates are severely mentally ill.

Not surprisingly, the study found that states that cut funding for public hospitals experienced increased arrest-related deaths, as well as higher rates of violent crime generally, especially aggravated assault.

To Learn More:

No Room at the Inn: Trends and Consequences of Closing Public Psychiatric Hospitals (by E. Fuller Torrey, M.D., et al., Treatment Advocacy Center) (pdf)

Newtown Shooting Put Spotlight on U.S. Mental Health Care–Again (by Sydney Lupkin, ABC News)
