Archive for January, 2011

Global Warning

Monday, January 17th, 2011

People hear climate change through different concerns. Some hear threats to the environment, others to people, and others still to national security. (Of course, there is overlap.)

For those in the national security category, the National Security Journalism Initiative has created Global Warning.

Water shortages in Yemen

Go to A Complex Climate Threat and click on water management—Morocco spends more than 1/5 of its budget on water management, and Sana’a could run out of water in 2025.

Pakistan floods 2010

Click on flooding:

Last summer, record floods ravaged Pakistan, killing nearly 2,000 people, damaging or destroying 1.2 million homes and laying waste to large portions of farmland. Afterward, 34 percent of the rice crop was gone and cholera swept through camps, affecting tens of thousands of people.

During previous disasters in unstable regions — the 2004 tsunami in Southeast Asia, the 1998-2000 drought in Central Asia — terrorist groups stepped in where governments failed, winning supporters with their aid. There were reports of such efforts following the Pakistan floods.

Click on energy shifts:

Since the Industrial Revolution, economic growth has been propelled by fossil fuel emissions. Switching to alternative fuels would change the foundation of the global economy.

While higher energy prices would give major oil exporters resources to increase their power, a shift away from fossil fuels could force changes in petro-regimes. The National Intelligence Council predicts Saudi Arabia, which would absorb the biggest shock, would face new pressures to institute major economic reforms, including women’s full participation in the economy.

Click on international trade:

Climate change stands to disrupt global markets, alter key trading routes and affect natural resource supplies.

After more than one-third of Russia’s grain crop was destroyed last summer by a devastating heat wave and fires — extreme weather events that President Dmitry Medvedev called “evidence of this global climate change” — the country banned all grain exports until the end of the year, causing a price spike in global markets. Less than a week later, food riots broke out in Mozambique.

Go here for videos on threats from climate change to California agriculture, NYC, and Houston energy infrastructure.

Recent articles include:

Our man in the greenhouse: Why the CIA is spying on a changing climate

This summer, as torrential rains flooded Pakistan, a veteran intelligence analyst named Larry watched closely from his desk at CIA headquarters just outside the capital.

For Larry, head of the CIA’s year-old Center on Climate Change and National Security, the worst natural disaster in Pakistan’s history is a warning.

“It has the exact same symptoms you would see for future climate change events, and we’re expecting to see more of them,” Larry, who asked his last name not be used for security reasons, said in a recent interview at the CIA. “We wanted to know: What are the conditions that lead to a situation like the Pakistan flooding? What are the important things for water flows, food security, [displaced people], radicalization, disease?”

As intelligence officials assess key components of state stability like these, they are realizing that the norms they had been operating with — like predictable river flows and crop yields — are shifting.

But the U.S. government is ill-prepared to act on changes that are coming faster than anticipated and threaten to bring instability to places of U.S. national interest, according to interviews with several dozen current and former officials and outside experts, and a review of two decades’ worth of government reports. Climate projections lack critical detail, they say, and information about how people react to changes — for instance, by migrating — is sparse. Military brass say they don’t yet have the intelligence they need in order to act.

Drying Peru
Losing the Andes glaciers

Glacier melt hasn’t caused a national crisis in Peru, yet. But high in the Andes, rising temperatures and changes in water supply have decimated crops, killed fish stocks and forced entire villages to question how they will survive for another generation.

U.S. officials are watching closely because without quick intervention, they say, the South American nation could become an unfortunate case study in how climate change can destabilize a strategically important region and, in turn, create conditions that pose a national security threat to Americans thousands of miles away.

“Think what it would be like if the Andes glaciers were gone and we had millions and millions of hungry and thirsty Southern neighbors,” said former CIA Director R. James Woolsey. “It would not be an easy thing to deal with.”

Glaciers in the South American Andes are melting faster than many scientists predicted, causing a dramatic change in the region’s availability of water for drinking, irrigation and electricity. Some climate change experts estimate entire glaciers will disappear in 10 years due to rising global temperatures, threatening to create instability across the globe long before their ultimate demise.

Oil rig damaged by Ike

Houston oil infrastructure exposed to storms

The largest search and rescue operation in U.S. history; the largest Texas evacuation ever; a $30 billion price tag; and 112 deaths in the U.S. And Ike was only a Category 2 storm, with mild-for-a-hurricane winds of 109 mph.

If Ike had been a direct hit on the channel, refineries would have been flooded with seawater despite 16-foot fortifications, likely requiring months of repairs and prolonging supply disruptions, according to analysis by the Severe Storm Prediction, Education and Evacuation from Disasters Center at Rice University.

Not only is the sea level rising, the land is sinking.

Disease: A top U.S. security threat

One of the most worrisome national security threats of climate change is the increased spread of disease, with potentially millions of people at risk of serious illness or death and vast numbers of animals and crops also in danger of being wiped out, U.S. intelligence and health officials say.

But more than a decade after such concerns were first raised by U.S. intelligence agencies, significant gaps remain in the health surveillance and response network—not just in developing nations, but in the United States as well, according to those officials and a review of federal documents and reports.

And those gaps, they say, undermine the ability of U.S. and world health officials to respond to disease outbreaks before they become national security threats.

U.S. military grasps effects of the rising tide

Climate change is fast becoming one of those security threats, according to U.S. and Bangladeshi officials, who have concluded it will help create new conflict hotspots around the world and heighten the tensions in existing ones—and impact the national security of the United States in the process. Moreover, climate change could overstress the U.S. military by creating frequent and intensified disasters and humanitarian crises to which it would have to respond.

Nowhere is that potential chain of events more worrisome than in Bangladesh, a country strategically sandwiched between rising superpowers China and India, and which also acts as a bridge between South Asia and South East Asia.

Already, Bangladesh is beset by extreme poverty, overcrowding and flooding that frequently render large numbers of people homeless. The Muslim-majority country also has had problems with Islamist radicalization.

And over the next two generations, those problems are expected to get worse due to climate change, which worsens other problems such as food and water scarcity, extreme weather and rising seas, according to interviews with current and former officials and experts. By 2050, rising sea waters are projected to cost the low-lying country about 17 to 20 percent of its land mass, rendering at least 20 million people homeless and decimating food production of rice and wheat, according to the United Nations Intergovernmental Panel on Climate Change. By then, its population is projected to reach more than 200 million, which could lead to internal societal unrest that spills over into neighboring India.

Dirty Coal, Clean Future

Saturday, January 15th, 2011

coal miner

Mining coal is notoriously dangerous, the remnants of those mines disfigure the Earth, and the by-products of coal’s combustion fill the air not simply with soot, smoke, and carbon dioxide but also with toxic heavy metals like mercury and lead, plus corrosive oxides of nitrogen and sulfur, among other pollutants. When I visited coal towns in China’s Shandong and Shanxi provinces, my face, arms, and hands would be rimed in black by the end of each day—even when I hadn’t gone near a mine. People in those towns, like their predecessors in industrial-age Europe and America, have the same black coating on their throats and lungs, of course. When I have traveled at low altitude in small airplanes above America’s active coal-mining regions—West Virginia and Kentucky in the East, Wyoming and its neighbors in the Great Basin region of the West—I’ve seen the huge scars left by “mountain top removal” and open-pit mining for coal, which are usually invisible from the road and harder to identify from six miles up in an airliner. Compared with most other fossil-fuel sources of energy, coal is inherently worse from a carbon-footprint perspective, since its hydrogen atoms come bound with more carbon atoms, meaning that coal starts with a higher carbon-to-hydrogen ratio than oil, natural gas, or other hydrocarbons.

Shanxi

James Fallows, in his Atlantic article, Dirty Coal, Clean Future, is not oblivious to coal’s faults, and he discusses in some depth coal’s rather large share of carbon dioxide emissions. Unfortunately, the scale of the climate change problem is huge:

As one climate scientist put it to me, “To stabilize the CO2 concentration in the atmosphere, the whole world on average would need to get down to the Kenya level”—a 96 percent reduction for the United States. The figures also suggest the diplomatic challenges for American negotiators in recommending that other countries, including those with hundreds of millions in poverty, forgo the energy-intensive path toward wealth that the United States has traveled for so many years.
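The 96% figure in the quote above is just a per-capita ratio: the reduction needed is one minus the target per-capita emissions divided by current per-capita emissions. A minimal sketch, using illustrative per-capita figures (assumed here, not taken from the article) chosen to be consistent with the quoted 96%:

```python
# Required US emissions cut to reach the "Kenya level".
# The per-capita figures (t CO2/person/year) are illustrative
# assumptions, picked to be consistent with the quoted 96%.
us_per_capita = 19.0
kenya_per_capita = 0.8

reduction = 1 - kenya_per_capita / us_per_capita
print(f"Required US reduction: {reduction:.0%}")  # roughly 96%
```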

The reduction needed is even more than 96% when we add in a portion of the greenhouse gas emissions from China, where half of the electricity is used to manufacture goods for export. Unfortunately, we will use coal in the future, a lot:

Precisely because coal already plays such a major role in world power supplies, basic math means that it will inescapably do so for a very long time. For instance: through the past decade, the United States has talked about, passed regulations in favor of, and made technological breakthroughs in all fields of renewable energy. Between 1995 and 2008, the amount of electricity coming from solar power rose by two-thirds in the United States, and wind-generated electricity went up more than 15-fold. Yet over those same years, the amount of electricity generated by coal went up much faster, in absolute terms, than electricity generated from any other source. The journalist Robert Bryce has drawn on U.S. government figures to show that between 1995 and 2008, “the absolute increase in total electricity produced by coal was about 5.8 times as great as the increase from wind and 823 times as great as the increase from solar”—and this during the dawn of the green-energy era in America. Power generated by the wind and sun increased significantly in America last year; but power generated by coal increased more than seven times as much… Similar patterns apply even more starkly in China. Other sources of power are growing faster in relative terms, but year by year the most dramatic increase is in China’s use of coal.
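Bryce’s point is base-rate arithmetic: a modest relative rise on coal’s enormous base adds far more absolute generation than spectacular relative rises on wind’s and solar’s tiny bases. A sketch with rough, illustrative generation figures (assumptions for illustration, not the government data Bryce used):

```python
# Absolute vs. relative growth: small % growth on a huge base
# beats huge % growth on a tiny base. TWh/year figures are
# illustrative assumptions, not Bryce's government data.
gen_1995 = {"coal": 1700.0, "wind": 3.2, "solar": 0.5}
multiplier = {"coal": 1.18, "wind": 16.0, "solar": 1.67}  # growth, 1995 -> 2008

increase = {src: gen_1995[src] * (multiplier[src] - 1) for src in gen_1995}

for src, delta in increase.items():
    pct = (multiplier[src] - 1) * 100
    print(f"{src:5s}: +{delta:6.1f} TWh absolute ({pct:.0f}% relative)")

ratio = increase["coal"] / increase["wind"]
print(f"coal's absolute increase is {ratio:.1f}x wind's")
```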

storing carbon dioxide

The price of making coal clean, by capturing and storing the carbon dioxide, includes a huge energy cost: perhaps a 30% increase or more in the fuel burned to make the same amount of electricity.

“When people like me look for funding for carbon capture, the financial community asks, ‘Why should we do that now?’” an executive of a major American electric utility told me. “If there were a price on carbon”—a tax on carbon-dioxide emissions—“you could plug in, say, a loss of $30 to $50 per ton, and build a business case.”

Looking at US policy in isolation, there is little reason for optimism, as utilities are refusing to ask ratepayers to pay an extra 3 – 5 cents/kWh for coal. Looking at the US and China together, though…
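The executive’s $30 – 50 per ton and the 3 – 5 cents/kWh figure are essentially the same number in different units, because a coal plant emits very roughly one metric ton of CO2 per MWh of electricity. A back-of-envelope conversion (the emissions intensity is a commonly cited ballpark, assumed here):

```python
# Convert a carbon price ($ per metric ton CO2) into the added cost
# of coal power (cents/kWh). Assumes ~1 t CO2 emitted per MWh of
# coal-fired electricity, a commonly cited ballpark figure.
TONS_CO2_PER_MWH = 1.0

def added_cost_cents_per_kwh(carbon_price_per_ton):
    dollars_per_mwh = carbon_price_per_ton * TONS_CO2_PER_MWH
    return dollars_per_mwh / 1000.0 * 100.0  # $/MWh -> cents/kWh

for price in (30, 50):
    print(f"${price}/ton -> {added_cost_cents_per_kwh(price):.1f} cents/kWh")
```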

Ming Sung from the Clean Air Task Force

Julio Friedmann from Lawrence Livermore National Laboratory

In the normal manufacturing supply chain—Apple creating computers, Walmart outsourcing clothes and toys—the United States provides branding, design, and a major market for products, while China supplies labor, machines, and the ability to turn concepts into products at very high speed.

But the cooperation on coal goes further:

In the search for “progress on coal,” like other forms of energy research and development, China is now the Google, the Intel, the General Motors and Ford of their heyday—the place where the doing occurs, and thus the learning by doing as well. “They are doing so much so fast that their learning curve is at an inflection that simply could not be matched in the United States,” David Mohler of Duke Energy told me.

“In America, it takes a decade to get a permit for a plant,” a U.S. government official who works in China said. “Here, they build the whole thing in 21 months. To me, it’s all about accelerating our way to the right technologies, which will be much slower without the Chinese.

“You can think of China as a huge laboratory for deploying technology,” the official added. “The energy demand is going like this”—his hand mimicked an airplane taking off—“and they need to build new capacity all the time. They can go from concept to deployment in half the time we can, sometimes a third. We have some advanced ideas. They have the capability to deploy it very quickly. That is where the partnership works.”

The good aspects of this partnership have unfolded at a quickening pace over the past decade, through a surprisingly subtle and complex web of connections among private, governmental, and academic institutions in both countries. Perhaps I should say unsurprisingly, since the relationships among American and Chinese organizations in the energy field in some ways resemble the manufacturing supply chains that connect factories in China with designers, inventors, and customers in the United States and elsewhere. The difference in this case is how much faster the strategic advantage seems to be shifting to the Chinese side.

Take home point: We need to add a cost to greenhouse gas emissions, in the United States and elsewhere, in the $30 – 50 per ton range if we are to stop using coal without carbon capture and storage.

Another New Yorker article: Jevons Paradox—Does Improving Efficiency Do Any Good?

Monday, January 3rd, 2011

The New Yorker has done much to introduce non-scientists to scientific thinking (eg, Kolbert’s articles on climate change), but now aims to confuse us, or so it appears, by presenting real concerns in a too simplistic manner. David Owen’s recent article, The Efficiency Dilemma, discusses the Jevons Paradox and has been attacked by critics who object to his omissions. Truth sometimes lies in the middle, but in this case, Truth appears to lie closer to the extremes, with a caveat: it depends on where and for what.

Jevons pointed out a century and a half ago that increased efficiency can lead to lower prices and thus to consumption greater than if there had been no improvement. The rebound effect is the term used when increased efficiency does lead to lower consumption, but behavior change makes the decrease smaller than the efficiency gain alone would predict.

From the article:
• Consumption increases as costs go down. Because refrigerators are so much cheaper to operate, Owen says, they have spread to hotel rooms and gas stations. Additional energy losses (and increases in greenhouse gas emissions) occur as we increase the amount of food we buy and waste (and consume) as refrigerator size increases. Altogether, per capita energy consumption due to all these changes has presumably grown even as the energy needed to power residential refrigerators has gone down. Other examples are the rapid increase in air conditioning and the size (and number) of houses in the South, and increases in lighting use in the US so “that darkness itself is spoken of as an endangered natural resource”—increases in efficiency mean that the typical person uses more energy for both lighting and air conditioning.
• Increased efficiency in automobiles has been devoted to increasing horsepower and weight rather than fuel economy.
• Decreases in cost increase both the number of car owners and the number of vehicle miles traveled per car per year.

Changing energy use in refrigerators

The Jevons Paradox is still considered a factor in many parts of the world. For example, the introduction of cheap, efficient cars to India (the Nano) was expected to lead to increased consumption of oil. (Between March 2009 and January 2011, some 1 million cars were sold—see here for some reasons why the Nano hasn’t taken off, although this may still happen.) Cheap solar panels (expensive compared to prices paid where there is a reliable grid, but cheap relative to the cost of a long ride in a motorcycle taxi to recharge the phone) and efficient light bulbs in Africa also lead to increased energy use, but in a low-greenhouse-gas form. There is great enthusiasm about the latter, but I have yet to hear policy experts wax enthusiastic about the Nano. The policy community appreciates the need to make more energy available to the poor. However, either the need for more cars is less clear than the needs for phones and lightbulbs, or the downsides of adding more photovoltaics are smaller than the problems of using more oil, of which climate change is just one.

Increased energy use in Kiptusuri, Kenya

Of course, oil use is increasing in India anyway, as Indians become wealthier. Owen fails to discuss the effects of increased wealth on people’s choices, a fairly large omission, and so would attribute the increase solely to the more efficient automobiles. (Nor does Owen consider the time needed for stock turnover.)

The rebound effects in 21st century US are of a different scale than the examples above. We already leave our lights on. A lot. We own considerably more than one car per licensed driver, 842 cars/1000 people (compared to 12/1000 in India). So it’s unlikely that the introduction of more efficient cars will lead to as dramatic an increase in fuel use as in India. Or that more efficient bulbs will produce the increase in lighting now being seen in Kiptusuri.

According to Effectiveness and Impact of Corporate Average Fuel Economy (CAFE) Standards, CAFE standards appear to have a 10 – 20% rebound effect, while changes in Europe produce a rebound effect of 20 – 30% (the difference is due to those shifting from public transit).

The rebound effect for cars today in the US may be greater than for refrigerators, now that the market for refrigerators is apparently saturated. (I’ve heard people in policy wonder when the US will reach saturation for automobiles—there has to be a point at which nothing can push Americans to drive more.)

There are three causes for the rebound effect, according to Energy Efficiency and the Rebound Effect: Does Increasing Efficiency Decrease Demand? (pdf):

Direct Effects – The consumer chooses to use more of the resource instead of realizing the energy cost savings. For example, a person with a more efficient home heater may choose to raise the setting on the thermostat, or a person driving a more efficient car may drive more. This effect is limited, since a person will only set the thermostat so high and has only so many hours to spend driving.

Indirect Effects – The consumer chooses to spend the money saved by buying other goods which use the same resource. For example, a person whose electric bill decreases due to a more efficient air conditioner may use the savings to buy more electronic goods.

Market or Dynamic Effects – Decreased demand for a resource leads to a lower resource price, making new uses economically viable. For example, residential electricity was initially used mainly for lighting, but as the price dropped many new electric devices became common. This is the most difficult aspect of the rebound effect to predict and to measure.

See the paper for the scale of the rebound effect, which is close to 0% for home appliances, 10 – 30% for cars, and 0 – 50% for space cooling.
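Those percentages can be read as the share of the engineering savings lost to increased use. A minimal sketch of the arithmetic, with a hypothetical driver:

```python
# Net savings after rebound: the rebound percentage is the fraction
# of the technical (engineering) savings eaten by increased use.
def net_savings(baseline_use, efficiency_gain, rebound):
    technical_savings = baseline_use * efficiency_gain
    return technical_savings * (1.0 - rebound)

baseline_gallons = 500.0  # hypothetical driver's fuel use per year
gain = 0.20               # a 20% more efficient car

for rebound in (0.0, 0.10, 0.30):
    saved = net_savings(baseline_gallons, gain, rebound)
    print(f"rebound {rebound:.0%}: {saved:.0f} of "
          f"{baseline_gallons * gain:.0f} possible gallons saved per year")
```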

Even advocates of energy efficiency see a need to do more. In Leaping the Energy Gap (subscription required), Dan Charles says,

Experience has shown that there is more to saving energy than designing better light bulbs and refrigerators. Researchers say it will need a mixture of persuasion, regulation, and taxation.

(August 14, 2009 Science)

A frequently touted statistic is that while per capita US electricity use increased 40% over the last 3 decades, it remained flat in California. Some credit the efficiency mandates in California. That appears to be true only in part:

Anant Sudarshan and James Sweeney of Stanford University’s Precourt Energy Efficiency Center (PEEC) recently calculated that the state’s energy policies can take credit for only a quarter of California’s lower per capita electricity use. The rest is due to “structural factors” such as mild weather, increasing urbanization, larger numbers of people in each household, and high prices for energy and land that drove heavy industry out of the state.

Art Rosenfeld

An old economic assumption is that if scientists add efficiency, the consumer will come.

[Art] Rosenfeld [the most important figure behind California’s push toward higher efficiency] and Edward Vine had a friendly, long-running argument during their 2 decades as colleagues at [Lawrence Berkeley National Laboratory]. Rosenfeld believed in technology. When he testified before the U.S. Congress, as he did frequently in the early 1980s, he always came with props in hand: compact fluorescent light bulbs, heat-shielding windows, or computer programs for predicting the energy use of new buildings. But Vine, whose Ph.D. is in human ecology, wasn’t convinced of technology’s power. “We can’t assume, if we have a great technology, that people will rush to stores and buy it,” Vine says. “We need to find out how people behave, how they make decisions, how they use energy, and we need to work with them.”

For the most part, energy-efficiency programs around the country have followed Rosenfeld’s line. They offer financial incentives for adopting energy-saving, cost-effective technology, and trust that consumers will follow their economic self-interest.

Yet many researchers are now coming around to Vine’s point of view. Consumers don’t seem to act like fully informed, rational decision-makers when they make energy choices. Many avoid making choices at all. Give them a programmable thermostat, and they won’t program it. Offer them an efficient light bulb that pays for itself in 2 years, and they won’t buy it.

Some points made by the article:
• The goal is to decrease energy use per person—stable energy use is not enough.
• Even for-profit companies don’t realize how much money can be saved on energy [and companies do much better than individuals].
• In a crisis, people respond to a call to be “good citizens”. Some percentage of that change in behavior remains after the crisis ends.
• We see waste in others reflecting their “inner characters” and “own wasteful practices as the product of circumstances”, so information about the need rarely helps.
• Role models do help.
• We care what others are doing. When the Sacramento Municipal Utility District included information with bills about how one’s energy use compares to one’s neighbors’, energy use declined 2%. [A flyer about saving energy left on your doorknob is ineffective if it stresses the importance of saving money or saving the earth, but effective if it says that your neighbors are doing it.]
• The current market option, more efficient and more expensive appliances targeting high-end customers, is less effective than selling these appliances at Costco or Walmart.
• Social marketing works, at least in some places, such as Hood River, OR, where 85% of the homes got energy audits and free efficiency upgrades.

[Hugh] Peach compared the process to a political campaign. The utility sat down with local leaders, followed their advice, and relied heavily on local volunteers. The process was time-consuming and labor-intensive but, Peach says, a pleasure. There was “a lot of community spirit. People just saw it as the right thing to do.”

• Feedback helps, eg, the Prius dashboard showing car drivers their rate of energy use. There is hope that Smart Meters will lead consumers to reduce energy use in their home, first by cutting use, eventually shifting to more efficient appliances.
• Green buildings don’t do nearly as well as advertised, and architects get too little feedback on how energy use changes as a result of their work. In a response to this article, several examples are given for projects where actual energy use came in at least double predicted energy use.
• There are a number of perverse incentives: people away from home have little incentive to reduce energy and water use; landlords have little incentive to purchase more expensive, more efficient appliances; cable services provide boxes that use 40 W, 24 hours/day, and have no incentive to spend a tad more on reducing energy use. These perverse incentives might be responsible for 1/4 of US residential energy use. In Japan, on the other hand, vending machine suppliers pay for the electricity, and vending machines are more efficient.
• Really, adding a cost to energy is necessary, because we need to see the cost of our behavior, which goes beyond the price we pay today for energy.

In Behavior and Energy Policy, (subscription required, March 5, 2010 Science), there is more discussion of how to combine greater energy efficiency with changed behavior.

Summary: The Jevons Paradox appears more important in less saturated markets, and other factors, such as increased wealth, should be considered. Increased efficiency does reduce energy use in the US, but if our goal is to mitigate greenhouse gas emissions quickly enough, we may want to move to a “mixture of persuasion, regulation, and taxation.”

Comments from others: See The National Geographic blog for the comments of James Barrett (Clean Economy Development Center) and Matthew Kahn (UCLA).

Recent Article in the New Yorker, Is There Something Wrong with the Scientific Method?

Saturday, January 1st, 2011

In an attempt to point out that not every article that makes it through peer review survives the scrutiny of the science community, New Yorker author Jonah Lehrer apparently goes a little further than he intended, and says so here. The Truth Wears Off begins with a number of examples in which the effects described in peer-reviewed articles don’t seem to be real, notably in medicine, the life sciences, and psychology. Lehrer gives some examples from physics as well.

To some, it appears that the effect first seen declines over time. Examples:

• People shown a face and asked to describe it showed a lower ability to recognize the face later (verbal overshadowing) in studies two decades ago, but the effect shrank dramatically year after year.

• Anti-psychotic drugs tested in the 1990s appear to be less effective today. Note: the article leaves unexplained whether the schizophrenics in these studies are similar to those studied a decade ago—in severity and type of symptoms, and in any other treatments they may have received.

• In an ESP test from early last century, some initially appeared to show paranormal ability, but further tests failed to substantiate this result.

• A purported preference among female barn swallows for symmetry in their mates led to a number of studies finding similar results for swallows and other species, but the correlation has since disappeared. Michael Jennions found that a large number of results in ecology and evolutionary biology demonstrate this decline effect.

In an apparent misunderstanding of the process, Lehrer treats cases in which “rigorously validated findings” can no longer be replicated as a problem with science itself. Most scientists would assume the problem lies both with the findings and with the sloppiness that leads to a large number of poor results.

Lehrer then discusses a few problems in the article, but does not tease out the importance of each:

• Journals and scientists look for results that disagree with the orthodoxy. Scientists are less likely to submit null results to journals, and journals are less likely to print them. Once the orthodoxy changes (from symmetry is irrelevant to symmetry is important to female barn swallows), confounding results become interesting. Note: This is considered a real phenomenon, but Lehrer gives little idea as to whether it affects 0.5% or 95% of articles submitted. A note for climate change skeptics: if results contrary to the scientific orthodoxy on climate change are submitted to peer review and make it through, they will get prominent play.

• The barn swallow studies were not done blind: different people should have been measuring feather length and assessing behavior. When it came time to round up or down, errors crept into measurements that differed by millimeters. Similarly, published acupuncture results vary by country, in part because the person testing for the effect knows whether acupuncture has been used.

• A number of studies, such as those finding genetic effects on hypertension and schizophrenia, were so badly done that the results are meaningless. One review of 432 such results found the vast majority worthless. Note: This is considered an important problem in some fields of science, notably medicine, and also my field, education. See comments below for what those in the life sciences and medicine think. There appears to be little support for Lehrer’s including physics experiments in his article.

• Lehrer assumes that all the later-refuted results were analyzed statistically in an appropriate way. Note: Statisticians do not assume this; see Andrew Gelman’s comment below.

Are there reasons that explain these results besides the one favored by many, that science is a crapshoot? The person who told me of this article certainly feels that way; he picks and chooses among scientific results, except when he knows scientists are wrong and so goes with other analysis.

Lehrer says, “We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.” Only science doesn’t prove so much as disprove, and what is left standing gains credibility.

Lehrer does not provide enough information or context for us to make sense of what he says. He repeats what everyone in science already knows: that research in some fields, and some peer review, is of lower quality, and that a number of peer-reviewed results turn out to be uninteresting, much more often in medicine and some of the life sciences. The one important point I got from the article, that results that no longer appear to be true are still relied on by some doctors, disappears among the noise.

Not mentioned is that people whose exposure to science comes primarily from articles on medicine see reason to doubt medical science, and many extrapolate to other fields of science. Those who prefer to doubt science will find justification in this article.

Comments from others
Jerry Coyne believes that his field, evolutionary biology, has a problem, in part because not many eyes look at each result.

I tend to agree with Lehrer about studies in my own field of evolutionary biology. Almost no findings are replicated, there’s a premium on publishing positive results, and, unlike some other areas, findings in evolutionary biology don’t necessarily build on each other: workers usually don’t have to repeat other people’s work as a basis for their own. (I’m speaking here mostly of experimental work, not things like studies of transitional fossils.) Ditto for ecology. Yet that doesn’t mean that everything is arbitrary. I’m pretty sure, for instance, that the reason why male interspecific hybrids in Drosophila are sterile while females aren’t (“Haldane’s rule”) reflects genes whose effects on hybrid sterility are recessive. That’s been demonstrated by several workers. And I’m even more sure that humans are more closely related to chimps than to orangutans. Nevertheless, when a single new finding appears, I often find myself wondering if it would stand up if somebody repeated the study, or did it in another species.

But let’s not throw out the baby with the bathwater. In many fields, especially physics, chemistry, and molecular biology, workers regularly repeat the results of others, since progress in their own work demands it. The material basis of heredity, for example, is DNA, a double helix whose sequence of nucleotide bases codes (in a triplet code) for proteins. We’re beginning to learn the intricate ways that genes are regulated in organisms. The material basis of heredity and development is not something we “choose” to believe: it’s something that’s been forced on us by repeated findings of many scientists. This is true for physics and chemistry as well, despite Lehrer’s suggestion that “the law of gravity hasn’t always been perfect at predicting real-world phenomena.”

Lehrer, like Gould in his book The Mismeasure of Man, has done a service by pointing out that scientists are humans after all, and that their drive for reputation—and other nonscientific issues—can affect what they produce or perceive as “truth.” But it’s a mistake to imply that all scientific truth is simply a choice among explanations that aren’t very well supported. We must remember that scientific “truth” means “the best provisional explanation, but one so compelling that you’d have to be a fool not to accept it.” Truth, then, while always provisional, is not necessarily evanescent. To the degree that Lehrer implies otherwise, his article is deeply damaging to science.

[Note: most scientists in physics, chemistry, and molecular biology, so far as I know, agree.]

David Gorski, an advocate of science-based medicine, says that people in medicine have been talking about a number of these issues for years; however, Lehrer goes too far in generalizing from poor medical studies to problems with science as a whole.

Jennions’ article was entitled Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution. Reading the article, I was actually struck by how relatively small the decline effect in evolutionary biology was found to be in Jennions’ study, at least compared to the impression that Lehrer gave in his article. Basically, Jennions examined 44 peer-reviewed meta-analyses and analyzed the relationship between effect size and year of publication; the relationship between effect size and sample size; and the relationship between standardized effect size and sample size. To boil it all down, Jennions et al. concluded, “On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort.” They concluded that publication bias was the “most parsimonious” explanation for this declining effect.
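Publication bias of the sort Jennions describes can produce a decline effect all by itself. Here is a minimal Python sketch (the effect size, sample sizes, and publication rule are illustrative assumptions, not values from Jennions’ data): early small studies are “published” only when statistically significant, later large studies are published regardless, and the average published effect falls over time even though the true effect never changes.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2  # small true standardized effect (assumed for illustration)

def study(n):
    """Simulate one two-group study with n subjects per group (sd = 1).

    Returns the estimated effect and a rough z statistic.
    """
    treat = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    est = statistics.mean(treat) - statistics.mean(ctrl)
    se = (2 / n) ** 0.5  # standard error of a difference of two means
    return est, est / se

# Early literature: small studies, published only if "significant" (|z| > 1.96).
early = []
while len(early) < 50:
    est, z = study(20)
    if abs(z) > 1.96:
        early.append(est)

# Later literature: larger studies, published regardless of outcome.
late = [study(200)[0] for _ in range(50)]

print(f"true effect:                 {TRUE_EFFECT}")
print(f"mean published early effect: {statistics.mean(early):.2f}")
print(f"mean published late effect:  {statistics.mean(late):.2f}")
```

The “decline” here is pure selection: the early studies that happened to overshoot the true effect are the only ones that cleared the significance filter.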

Personally, I’m not sure why Jennions was so reluctant to talk about such things publicly. You’d think from his responses in Lehrer’s interview that scientists would be coming for him with pitchforks, hot tar, and feathers if he dared to point out that effect sizes reported by investigators in his scientific discipline exhibit small declines over the years due to publication bias and the bandwagon effect. Perhaps it’s because he’s not in medicine; after all, we’ve been speaking of such things publicly for a long time. Indeed, we generally expect that most initially promising results, even in randomized trials, will not ultimately pan out. In any case, those of us in medicine who might not have been willing to talk about such phenomena became more than willing after John Ioannidis published his provocatively titled article Why Most Published Research Findings Are False around the time of his study Contradicted and Initially Stronger Effects in Highly Cited Clinical Research. Physicians and scientists are generally aware of the shortcomings of the biomedical literature. Most, but sadly not all of us, know that early findings that haven’t been replicated yet should be viewed with extreme skepticism and that we can become more confident in results the more they are replicated and built upon, particularly if multiple lines of evidence (basic science, clinical trials, epidemiology) all converge on the same answer. The public, on the other hand, tends not to understand this.

Gorski also discusses the effect of subject popularity on calculations of error rates. Commenters look at the challenges Lehrer presents from the physical sciences and do not support his conclusions.

It’s always good to run your results by someone who is very good at statistics. Andrew Gelman, statistician, says,

The short story is that if you screen for statistical significance when estimating small effects, you will necessarily overestimate the magnitudes of effects, sometimes by a huge amount. I know that Dave Krantz has thought about this issue for awhile; it came up when Francis Tuerlinckx and I wrote our paper on Type S errors, ten years ago.

My current thinking is that most (almost all?) research studies of the sort described by Lehrer should be accompanied by retrospective power analyses, or informative Bayesian inferences. Either of these approaches–whether classical or Bayesian, the key is that they incorporate real prior information, just as is done in a classical prospective power analysis–would, I think, moderate the tendency to overestimate the magnitude of effects.
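Gelman’s first point, that screening small-effect estimates for statistical significance necessarily overestimates their magnitudes, can be shown with a short simulation (the true effect and standard error below are illustrative assumptions): among the estimates that pass the significance filter, the average magnitude is several times the true effect, and some even have the wrong sign.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1  # a small true effect (illustrative)
SE = 0.25          # standard error of each study's estimate (illustrative)

# Simulate many noisy estimates of the same small effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(100_000)]

# Keep only the "statistically significant" ones (|estimate| > 1.96 * SE).
significant = [e for e in estimates if abs(e) > 1.96 * SE]

# Type M (magnitude) error: how inflated are the surviving estimates?
exaggeration = statistics.mean(abs(e) for e in significant) / TRUE_EFFECT
# Type S (sign) error: what fraction of survivors point the wrong way?
wrong_sign = sum(e < 0 for e in significant) / len(significant)

print(f"fraction significant:         {len(significant) / len(estimates):.2%}")
print(f"mean exaggeration (Type M):   {exaggeration:.1f}x")
print(f"wrong-sign fraction (Type S): {wrong_sign:.1%}")
```

Under these assumed numbers the significant estimates overstate the true effect several-fold, which is one way initially strong published effects can “wear off” once larger follow-up studies appear.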

Note: I don’t understand statistics, or Gelman’s solutions, but I learned early on that poor statistics is the downfall of many a conjecture.

PZ Myers, biologist

Early in any scientific career, one should learn a couple of general rules: science is never about absolute certainty, and the absence of black & white binary results is not evidence against it; you don’t get to choose what you want to believe, but instead only accept provisionally a result; and when you’ve got a positive result, the proper response is not to claim that you’ve proved something, but instead to focus more tightly, scrutinize more strictly, and test, test, test ever more deeply.

Steven Novella, neurologist, discusses how the naive, the skeptical (scientists mostly fit in this category), and the deniers see science, then says,

Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!

John Horgan sees this as the decline of illusion. He is not a big fan of truthiness.

Lehrer’s reference to physics was checked by Charles Petit. He quotes Lawrence Krauss:

“The physics references are (deposit scatological bovine expletive here) … the neutron data have fallen, reflecting under-estimation of errors, but the lower lifetime doesn’t change anything having to do with the model of the neutron, which is well understood and robust … And as for discrepancies with gravity, the deep borehole stuff is interesting but highly suspect. Moreover, all theories conflict with some experiments, because not all experiments are right.” / LMK