Subsidizing renewables increases research and development

April 27th, 2014

The last post discussed Severin Borenstein’s findings that most justifications for renewables subsidies don’t make sense. David Popp’s Innovation and Climate Policy brings up a different reason, a market failure that might not be adequately addressed simply by pricing fossil fuels at a much higher level to reflect their costs.

This post is California-centric. My state has invested heavily in renewables—is this a good choice, or can we do better?

Every major paper I’ve read on energy policy stresses the need for research and development (R&D), both government (basic research) and private (closer to market), so that tomorrow’s energy is cheaper. These papers agree that R&D needs to increase significantly, and that we will pay dearly tomorrow for failing to invest enough today. In an article on the hearing for current Secretary of Energy Moniz, the Washington Post provided more detail:

Spending for R&D should be greater
The Washington Post looks at the level of investment in R&D in a number of charts; here, expenditures are well below International Energy Agency recommendations.

Spending over time
We’re also spending much less on energy R&D than in the 1970s. (Energy research funds doubled in real terms between 1973 and 1976, and almost doubled again by 1980. The Arab oil embargo began in October 1973.)

Federal R&D can be increased by direct expenditure (although we mostly choose not to), but private R&D cannot. What to do? Popp says that using subsidies and mandates to make renewables more attractive today, more attractive than they would be even if a huge cost were added to GHG emissions, is important, because this leads to large increases in private R&D and is an investment in our energy future.

This may be, but I found his arguments unpersuasive; let me know what I missed. I don’t claim that Popp’s conclusions are wrong, only that it may make sense to explore the issues more completely.

First, how much money goes to energy R&D?

US DOE R&D
This is the amount of R&D paid for by the Department of Energy (DOE) from 1990 to 2013; about $6 billion went to energy in 2013.

Popp errs in saying that DOE 2008 energy R&D was $4.3 billion and that 23% went to nuclear. The chart shows this is clearly not the case: Popp does not differentiate between fission (the current method) and fusion (decades in the future), or between civilian and military research. Globally, Popp says, 39% of $12.7 billion went to nuclear and only 12% to renewables. Looking at the numbers in greater detail gives a different picture. For example, while it is true that in 2005 over 40% of global energy R&D went to fission and fusion, once Japanese research (mostly for its breeder reactor) and French research (Areva is government owned) are taken out, total fission research in 2005 was $308 million, less than one-third of the amount spent on renewables.

What should R&D focus on?

Is private R&D currently targeted in a way that makes sense? Where are the biggest deficiencies?

I did not see a treatment of this question. Severin Borenstein says that it is important for California to focus not just on reducing its own emissions, but on finding solutions that matter for the world.

The number one solution in the Intergovernmental Panel on Climate Change Working Group III report, and in various International Energy Agency reports, is increased efficiency (in power plants, transmission and distribution, cars, heating and cooling, appliances, etc). Carbon capture and storage (CCS) is very high on the list of technologies for making electricity. This is because CCS can work with existing fossil fuel plants, which are likely to be in operation for decades, and with industries such as steel that are energy intensive but do not use electricity. Additionally, CCS can be used with bioenergy to take carbon dioxide out of the air. Carbon capture and storage is not a source of energy but a set of methods that significantly reduce the carbon dioxide released when fossil fuels or bioenergy are burned.

There are a number of other low-GHG solutions, including nuclear, solar, and wind, and to a lesser extent, geothermal. By pointing out that the R&D budget for nuclear is already high, Popp appears to imply that it is sufficient. But even if an R&D budget is large, how do we determine what constitutes sufficient funding?

• The size of the current and historical R&D budget, both public and private, is not the only criterion. Huge amounts have been spent on solar panel (photovoltaic, or PV) R&D over the decades, and PVs still cannot compete with fossil fuels or nuclear power without large subsidies; solar needs more R&D than wind or nuclear to become affordable. How do we evaluate needs and choose among sources? Does it make sense to consider wind and solar together?

• The role the solution will play in the future is also important, as is its timing. The International Energy Agency has been warning for years that CCS research in particular needs to proceed on a much faster time scale (see any executive summary of its Energy Technology Perspectives). How do we select among CCS, nuclear, and wind and solar power?

• Other states and countries are subsidizing some solutions—Germany today, and Spain in the past, are among a number of countries and states that have spent vast sums subsidizing renewables, so perhaps California might consider investing in other sources.

Does this method work particularly well compared to other methods of encouraging private R&D?

California has a long history of mandating technology change first. California regulated air quality before the federal Environmental Protection Agency existed, so it has the right to set its own smog standards, fuel efficiency standards, etc. Thank goodness, because our smog standards came earlier and were more stringent.

We also have a history of programs to push technology. California mandated electric cars in 1990, and participated in a partnership beginning in 2000 to encourage fuel cell buses, with both federal and state funding. What does the economics literature say about the success of pushing technology that is still far in the future? Are affordable solar panels near enough that private R&D is a good investment?

Are there other methods that would be more successful, such as funding research hubs, or giving grants to various R&D projects? Is there not yet enough history to choose among methods? (Presumably most or all work better than the current alternative of underfunding R&D.)

Summary

It is clear that U.S. governmental and private investment in energy R&D is too small. David Popp’s Innovation and Climate Policy discusses how to increase private investment, but insufficient information is provided about what economists know about encouraging R&D.

Of all the reasons to subsidize and mandate renewables, many listed in the previous post, the need to encourage private R&D makes the most sense. However, it seems to me important to treat a number of questions in more detail. These include:
• How much of our goal is to push R&D? How much is to meet local needs vs. global needs?
• How do we choose between CCS, nuclear, solar, wind and other clean energy technologies?
• Is this method of encouraging R&D likely to be most fruitful, or are there better alternatives?

Lightly edited for clarity

Most popular reasons for subsidizing renewables don’t make sense

April 27th, 2014

The Intergovernmental Panel on Climate Change recently released a set of major reports. Working Group I said that we must severely limit the amount of greenhouse gases we release if we are to stay below a 2°C increase over pre-industrial times. Working Group II discussed changes we will see, and could see, if we do not meet this goal. Working Group III said that if we make good choices about how to reduce GHG emissions, mitigation could be fairly cheap.

With that in mind, it makes sense to look at policies both in place and planned. One such policy is direct subsidies, and mandates (an indirect subsidy), for renewables, particularly wind and solar. Wind and solar are expected to be important in the 2050 time frame, together supplying between 20 and 60% of electricity capacity, depending on the region, according to the 2012 International Energy Agency Energy Technology Perspectives. (Capacity refers to the amount of electricity that can be produced at maximum operation. Because wind and solar have lower capacity factors than nuclear or fossil fuels, their actual contribution will be much lower.)
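
To make the distinction between capacity and generation concrete, here is a minimal sketch that weights capacity by capacity factor before comparing shares. The capacity factors and the hypothetical grid mix are illustrative assumptions, not numbers from the IEA report.

```python
# Rough illustration: why a 40% share of *capacity* can be a much smaller
# share of *electricity generated*. Capacity factors are illustrative
# round numbers, not figures from the IEA report.
HOURS_PER_YEAR = 8760

def annual_generation_gwh(capacity_gw, capacity_factor):
    """Electricity actually produced in a year, in GWh."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR

# Hypothetical grid: 40 GW of wind and solar, 60 GW of nuclear and fossil.
wind_solar = annual_generation_gwh(40, 0.25)   # ~25% capacity factor
firm       = annual_generation_gwh(60, 0.85)   # ~85% capacity factor

share_of_capacity   = 40 / (40 + 60)
share_of_generation = wind_solar / (wind_solar + firm)

print(f"capacity share:   {share_of_capacity:.0%}")    # 40%
print(f"generation share: {share_of_generation:.0%}")  # ~16%
```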

Currently, the majority of U.S. states and a number of countries subsidize and/or mandate renewables. They usually subsidize capacity (paying per megawatt built) or electricity (paying per kWh), or require a set percentage of power to come from renewables (33% in California by 2020, although some can be built out of state).

Renewables include energy from hydroelectric (although some governments don’t count large hydro); biomass or bioenergy—generally agriculture waste and landfill gas, although future plans include dedicated crops; wind; geothermal; sun; and tidal or other forms of marine power. (Geothermal is not technically renewable on a human time scale, as wells tap out.) The sun is used to make electricity using two different technologies: making steam, which is then used in the same way fossil fuel or nuclear steam is used, and solar panels, or photovoltaics (PV). Renewables can be used for electricity, heat (particularly sun and biomass) or transport (particularly biofuels). These two posts will examine only renewables used for electricity.

In many places, renewables subsidies exist because they are politically acceptable, and the solutions economists favor are not. Highest on economists’ list is adding a steep cost through a tax or a cap and trade program. Upcoming blogs will examine why, and the differences between the two. The current so-called social cost of carbon, the cost society pays but the polluter does not, is $37/ton of carbon dioxide. (A number of economists say this is too low, and give wonky reasons to support their thinking. They say, in part, that “because the models omit some major risks associated with climate change, such as social unrest and disruptions to economic growth, they are probably understating future harms.” This short article is worth reading.)

Assuming we adopt a tax or cap and trade: will it make sense to continue subsidizing and/or mandating renewables? Economists ask: is there a market failure that cannot be addressed sufficiently by including the steep cost of greenhouse gas (GHG) emissions in the price, one that requires other policy interventions? Internalizing a social cost of $37/ton, which adds 4 cents or so per kWh to coal power and about half that to natural gas power, makes renewables more attractive because fossil fuels become more expensive. However, the goal is not to make renewables more attractive, but to solve a number of market problems. Economists don’t see their goal as favoring certain solutions, but as making the market work better by removing market failures. Of these, the most important is the failure to price fossil fuel pollution correctly.
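
The “4 cents or so” per kWh figure can be checked with a back-of-the-envelope calculation. The emission intensities below, roughly 1 kg CO2 per kWh for coal and about half that for natural gas, are typical approximate values I am assuming, not figures taken from the papers discussed here.

```python
# Back-of-the-envelope: a $37/ton CO2 social cost expressed in cents per kWh.
# Emission intensities are typical approximate values (assumptions),
# not numbers from Borenstein or Popp.
SOCIAL_COST_PER_TON = 37.0      # dollars per metric ton of CO2
KG_PER_TON = 1000.0

coal_kg_per_kwh = 1.0           # ~1 kg CO2 per kWh for coal
gas_kg_per_kwh = 0.5            # roughly half that for natural gas

coal_cents = SOCIAL_COST_PER_TON * coal_kg_per_kwh / KG_PER_TON * 100
gas_cents = SOCIAL_COST_PER_TON * gas_kg_per_kwh / KG_PER_TON * 100

print(f"coal: {coal_cents:.1f} cents/kWh")   # ~3.7 cents, i.e. "4 cents or so"
print(f"gas:  {gas_cents:.1f} cents/kWh")    # ~1.9 cents, about half
```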

Two papers I read recently look at direct or indirect subsidies to renewables. Severin Borenstein, in The Private and Public Economics of Renewable Electricity Generation, finds that most arguments made in favor of renewables subsidies do not stand up under scrutiny. (See the next post for a discussion of the other paper.)

Wind is currently close to fossil fuels in price, but wind has several disadvantages (it is non-dispatchable, tends to blow more when it is less needed, and requires expensive and GHG-emitting backup power). While coal and natural gas do get subsidies, these are a fraction of a cent/kWh. Renewables get hefty federal subsidies, such as 2.1 cents/kWh for wind, and more for solar. [These are supplemented by state subsidies.]

What benefits of renewables are not addressed by adding a cost to greenhouse gases, and so justify subsidies to renewables? Borenstein looks at a number of assertions:

• Some renewables decrease pollution other than greenhouse gases, pollutants that damage human health (as well as ecosystems and agriculture).

This is another important cost that polluters do not pay today. However, this cost to human health, ecosystems, and agriculture is more variable, depending on population density, climate, and geography. General subsidies for renewables would not make sense; subsidies would have to target the power plants doing the most damage. That is not done today.

• Increasing the use of renewables will increase energy security, because the U.S. will produce more of its own electricity.

Since the U.S. uses U.S. coal and natural gas, U.S. rivers, and so on, this argument doesn’t seem to apply here (it might apply elsewhere). It could apply to oil imports: electric cars, if they are successful, could replace imported oil. However, because coal and natural gas are cheaper, they would be more effective than renewables in replacing oil in transportation, so renewables have no inherent advantage.

• Subsidies for renewables will lead to more learning by doing, and this will lead to lower prices.

However, this subsidy for renewables is appropriate only if society, rather than the particular company, captures the benefit. And there appears to be little evidence that this oft-cited factor has been important to the decrease in solar panel prices over time. There is more evidence that technological progress in the space program and in semiconductors, as well as the increasing size of solar companies, has had a larger effect.

• Green jobs will follow, as renewables require more workers (or/and more workers among the unemployed and underemployed).

This statement has two components. There is uneven support for the idea that renewables and energy efficiency employ more people than other fields of energy. They may also employ more workers who have trouble finding work, a social benefit. The longer-term argument is that subsidies will build a renewables industry, although evidence from Germany and Spain does not appear to support this idea. Studies might provide support for one or both ideas.

• Lower costs for fossil fuels will follow decreases in the cost of competing forms of energy.

The evidence for this is scarce.

Summary

The main justification discussed so far for renewables subsidies over adding a cost to greenhouse gas emissions appears to be that society allows subsidies and does not allow a tax. The next post will examine an additional reason, the role subsidies play in increasing research and development.

IPCC on Mitigation: Which technologies help reduce GHG emissions the most?

April 16th, 2014

Intergovernmental Panel on Climate Change Working Group 3 (Mitigation) has produced its update to the 2007 report. This post looks at electricity technologies.

Summary, also comments
• Improving efficiency much more rapidly in all sectors (energy production, distribution, and use) is crucial.
• Carbon capture and storage (CCS) is the single biggest addition to business as usual, if our goal is to keep atmospheric levels of CO2-equivalent below 450 ppm, or even 550. Plans for research and development, and deployment, should proceed rapidly and aggressively. (This will be aided by adding a cost to greenhouse gas emissions to cover their cost to society.)
• Bioenergy is the next most important technology. The “limited bioenergy” scenario increases bioenergy use to 5.5 times 2008 levels by 2050, and it will be very costly if we restrict bioenergy even to that level. There are a number of concerns about how sustainable this path will be, and about the quantity of biomass available for electricity and fuels if temperatures increase more.
• Because we did not get our act together years ago, we will be using bioenergy and carbon capture and storage together. Bioenergy will take carbon dioxide out of the atmosphere, and CCS will put it into permanent storage. This will cost several hundred dollars/ton CO2, which is relatively cheap compared to the alternative (climate change), although much more expensive than getting our act together earlier would have been.
• To the extent that we add nuclear, wind, and solar as rapidly as makes sense, we reduce dependence on bioenergy.
• Given the relative importance of carbon capture and storage, it may make sense for nations (e.g., Germany) and states (e.g., California) that want to jump start technologies to focus more on CCS than on renewables.
• Do all solutions, now.

Which solutions for electricity are important?
IPCC provides an answer by calculating the cost of failing to use the solution.

Efficiency
Efficiency remains the single largest technology solution, along with shifting to best available technologies: more efficient power plants, cars, buildings, air conditioners, and light bulbs. Since so many buildings and power plants being constructed now will operate for decades, early implementation of efficiency makes the task possible. The cost of failing to add efficiency sufficiently rapidly isn’t calculated in the Summary for Policymakers, but assume it’s more than we want to pay.

Nuclear and Solar + Wind
Can we do without renewables or nuclear power? Looking at only the attempt to reduce greenhouse gas emissions from electricity, IPCC found that opting out of nuclear power would increase costs by 7% in the 450 parts per million CO2-equivalent goal, and limiting wind and solar would increase costs by 6%. To stay below 550 ppm CO2-eq, costs would go up 13% (of a much smaller cost) without nuclear, and 8% without solar and wind. Crossing either off the list looks pretty unattractive.

Go to Working Group 3 for the full report, and the technical summary. IPCC also has an older report, Special Report on Renewable Energy Sources and Climate Change Mitigation.

Caveats re nuclear:

Nuclear energy is a mature low-GHG emission source of baseload power, but its share of global electricity generation has been declining (since 1993). Nuclear energy could make an increasing contribution to low-carbon energy supply, but a variety of barriers and risks exist (robust evidence, high agreement). Those include: operational risks, and the associated concerns, uranium mining risks, financial and regulatory risks, unresolved waste management issues, nuclear weapon proliferation concerns, and adverse public opinion (robust evidence, high agreement). New fuel cycles and reactor technologies addressing some of these issues are being investigated and progress in research and development has been made concerning safety and waste disposal.

I’m not sure what risks are associated with uranium mining.

Caveats re renewable energy (RE), excluding bioenergy:

Regarding electricity generation alone, RE accounted for just over half of the new electricity generating capacity added globally in 2012, led by growth in wind, hydro and solar power. However, many RE technologies still need direct and/or indirect support, if their market shares are to be significantly increased; RE technology policies have been successful in driving recent growth of RE. Challenges for integrating RE into energy systems and the associated costs vary by RE technology, regional circumstances, and the characteristics of the existing background energy system (medium evidence, medium agreement).

Note: the capacity factor for intermittents like wind and solar (and sometimes hydro), the percentage of electricity actually produced compared to what would be produced if the source ran at maximum capacity 24/7, is dramatically lower than for nuclear and other sources of electricity. Added capacity (gigawatts brought online) does not tell us how much electricity (GWh) was added. The increase in coal generation in 2012 was 2.9% (see coal facts 2013). Since coal supplied 45% of 2011 electricity, the increase in coal was far greater than the increase in wind and solar combined.
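
One way to make such comparisons properly is to convert added capacity into added generation by multiplying by capacity factor and hours per year. A sketch, with gigawatt additions and capacity factors that are illustrative assumptions rather than 2012 statistics:

```python
# Sketch: compare capacity additions (GW) in terms of electricity actually
# generated (TWh), by weighting each with a capacity factor.
# All numbers below are illustrative assumptions, not 2012 statistics.
HOURS_PER_YEAR = 8760

def added_generation_twh(added_gw, capacity_factor):
    """Annual generation from new capacity, in TWh."""
    return added_gw * capacity_factor * HOURS_PER_YEAR / 1000

wind_solar_added = added_generation_twh(75, 0.22)   # e.g. 75 GW at ~22% CF
coal_added       = added_generation_twh(60, 0.60)   # e.g. 60 GW at ~60% CF

print(f"wind + solar additions: ~{wind_solar_added:.0f} TWh/yr")
print(f"coal additions:         ~{coal_added:.0f} TWh/yr")
```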

Bioenergy
The use of bioenergy increases dramatically in all scenarios. If we limit bioenergy to 5.5 times 2008 levels, costs rise 64% (18% of a smaller number for the 550 ppm scenario). Bioenergy, using plants for fuels and power (electricity), dominates high-renewables scenarios.

Caveats re bioenergy:

Bioenergy can play a critical role for mitigation, but there are issues to consider, such as the sustainability of practices and the efficiency of bioenergy systems (robust evidence, medium agreement). Barriers to large scale deployment of bioenergy include concerns about GHG emissions from land, food security, water resources, biodiversity conservation and livelihoods. The scientific debate about the overall climate impact related to land-use competition effects of specific bioenergy pathways remains unresolved (robust evidence, high agreement). Bioenergy technologies are diverse and span a wide range of options and technology pathways.

There are many concerns about the amount of bioenergy—biopower and biofuels. With so many demands on land from an increasing population wanting food, fiber, green chemicals, etc., in a world with a rapidly changing climate, sustainability is not foreordained. The Special Report on Renewable Energy Sources and Climate Change Mitigation estimates the net effect on yields to be small worldwide at 2°C, although regional changes are possible. By mid-century, the temperature increase over pre-industrial could be more than 2°C, and yields are more uncertain.

Note: fusion energy is not mentioned in this short summary, but such strong dependence on bioenergy gives an idea why there is so much research on somewhat speculative sources of energy.

Carbon Capture and Storage
Carbon capture and storage (CCS) provides even more of the solution—costs go up 138% if we do without carbon capture and storage for the 450 ppm scenario (39% of a smaller number for 550 ppm scenario). Part of the attraction of CCS is that it can help deal with all the electricity currently made using fossil fuels. A number of countries are heavily invested in fossil fuel electricity, and a smaller number of countries, from China to Germany, are adding coal plants at a rapid rate, and will likely be reluctant to let expensive capital investments go unused. Additionally, as International Energy Agency (IEA) points out, almost half of carbon capture and storage is aimed at decarbonizing industry: steel, aluminum, oil refineries, cement, and paper mills use fossil fuel energy directly. Nuclear is often not practical in such situations, and wind and solar rarely are.

BECCS—Bioenergy and CCS
One of the cheaper ways to take carbon out of the atmosphere is to combine bioenergy with carbon capture and storage. Using plant matter to make electricity is nearly carbon neutral, as plants take carbon dioxide out of the atmosphere to grow and release it back when they are burned for electricity or fuel; adding CCS stores that CO2 permanently instead. Cheaper is relative to the costs of climate change, as the cost is expected to be several hundred dollars/ton. This method is much more expensive than other methods of addressing climate change that are currently underutilized.

The International Energy Agency booklet, Combining Bioenergy with CCS, discusses the challenges of ascertaining whether the biomass was grown sustainably.

Note: IEA gives some sense of how rapidly CCS should come online:

Goal 1: By 2020, the capture of CO2 is successfully demonstrated in at least 30 projects across many sectors, including coal- and gas-fired power generation, gas processing, bioethanol, hydrogen production for chemicals and refining, and DRI. This implies that all of the projects that are currently at an advanced stage of planning are realised and several additional projects are rapidly advanced, leading to over 50 MtCO2 safely and effectively stored per year.

Goal 2: By 2030, CCS is routinely used to reduce emissions in power generation and industry, having been successfully demonstrated in industrial applications including cement manufacture, iron and steel blast furnaces, pulp and paper production, second-generation biofuels and heaters and crackers at refining and chemical sites. This level of activity will lead to the storage of over 2 000 MtCO2/yr.

Goal 3: By 2050, CCS is routinely used to reduce emissions from all applicable processes in power generation and industrial applications at sites around the world, with over 7 000 MtCO2 annually stored in the process.

How much more low-GHG electricity is needed?
IPCC says 3 – 4 times today’s level by 2050. About 33% of today’s electricity is low-GHG, so by mid-century, more electricity than we make today will need to come from fossil fuel and bioenergy with CCS, nuclear, hydro, wind, solar, and other renewables.
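
A quick arithmetic check of that claim, normalizing today’s total electricity generation to 1 (the one-third low-GHG share and the 3 to 4 multiplier come from the text above):

```python
# Quick check: if today's low-GHG electricity (about a third of total
# generation) must grow 3 to 4 times by 2050, the low-GHG total alone
# nearly equals or exceeds everything we generate today.
todays_total = 1.0        # normalize today's total electricity to 1
low_ghg_share = 0.33      # roughly a third of today's electricity is low-GHG

for multiplier in (3, 4):
    low_ghg_2050 = low_ghg_share * todays_total * multiplier
    print(f"{multiplier}x today's low-GHG electricity = "
          f"{low_ghg_2050:.2f} of today's total generation")
# 3x -> ~0.99 of today's total; 4x -> ~1.32, i.e. more than we make today.
```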

For some of the challenges to rapidly increasing reliance on any energy source, see David MacKay’s Sustainable Energy—Without the Hot Air.

Uncertainty and climate change adaptation—Part 1, Transportation

February 19th, 2014

Uncertainty is sometimes our friend, but not for climate change.

We don’t know:

• How much greenhouse gas will we choose to emit?
• How much will a particular quantity of GHG warm the Earth?
• How will that increase in Earth’s temperature change the weather (average temperatures and ranges? average precipitation and ranges?)
• How do we prepare for a future we’re not sure of when we find it so challenging to prepare for current realities?

Future posts will look at other challenges to adaptation—water availability, storm surges, agriculture, and ecosystems. This post will focus on transportation.

• Where do we locate new roads, and when do we begin to move the current ones? San Francisco sees a threat to the Great Highway, with NASA predicting a sea level increase of 16″ (40 cm) by mid-century and 55″ (140 cm) by the end of the century. In Alaska, roads are buckling as the permafrost melts, and in some areas, road access has been reduced to 100 days a year, down from 200.

• How will travel preferences change as costs are added to greenhouse gas emissions, making travel by bus and train more attractive relative to travel by car? In many places, transportation infrastructure is being built for business-as-usual scenarios that assume no behavior switching. Even reallocating lanes in existing infrastructure, perhaps to give buses more priority, can engender controversy.

• What temperatures should roads be designed for? Freeways buckled in Germany when temperatures reached 93°F (34°C), resulting in accidents and one death.

Buckling Highways: German Autobahns Can’t Stand the Heat

• How will trains cope with climate change? Floods are a problem (Amtrak didn’t provide service between Denver and Chicago for weeks in 2008 due to floods). Heat is as well. Amtrak had a heat solution: require speeds to stay below 80 mph (130 kph) when temperatures exceeded 95°F (35°C). Unfortunately, as described in Changes in Amtrak’s Heat Order Policy,

The impact on schedule performance and track capacity was substantial, considering that the Northeast Corridor (NEC) handles up to 2,400 trains per day at speeds up to 150 MPH. The disruption was attributed to increased running times, trains arriving at key capacity choke points out of sequence, and inability to turn train consists in a timely manner at terminals.

For now, Amtrak is working to establish a better protocol for heat triggers, but at some point, there will be a number of days each year in a number of locations where today’s train infrastructure won’t work with the new temperatures.

Heatwave in Australia

The National Academy of Sciences discusses five climate changes expected to have important effects on transportation in Potential Impacts of Climate Change on U.S. Transportation:

• Increases in very hot days and heat waves,
• Increases in Arctic temperatures,
• Rising sea levels,
• Increases in intense precipitation events, and
• Increases in hurricane intensity

Naturally, NAS has some recommendations. Does transportation decision-making in your region incorporate their ideas, or other similar ideas, into planning?

Finding: The past several decades of historical regional climate patterns commonly used by transportation planners to guide their operations and investments may no longer be a reliable guide for future plans. In particular, future climate will include new classes (in terms of magnitude and frequency) of weather and climate extremes, such as record rainfall and record heat waves, not experienced in modern times as human-induced changes are superimposed on the climate’s natural variability.

Finding: Climate change will affect transportation primarily through increases in several types of weather and climate extremes, such as very hot days; intense precipitation events; intense hurricanes; drought; and rising sea levels, coupled with storm surges and land subsidence. The impacts will vary by mode of transportation and region of the country, but they will be widespread and costly in both human and economic terms and will require significant changes in the planning, design, construction, operation, and maintenance of transportation systems.

Recommendation 1: Federal, state, and local governments, in collaboration with owners and operators of infrastructure, such as ports and airports and private railroad and pipeline companies, should inventory critical transportation infrastructure in light of climate change projections to determine whether, when, and where projected climate changes in their regions might be consequential.

Finding: Potentially, the greatest impact of climate change for North America’s transportation systems will be flooding of coastal roads, railways, transit systems, and runways because of global rising sea levels, coupled with storm surges and exacerbated in some locations by land subsidence.

Recommendation 2: State and local governments and private infrastructure providers should incorporate climate change into their long-term capital improvement plans, facility designs, maintenance practices, operations, and emergency response plans.

Finding: The significant costs of redesigning and retrofitting transportation infrastructure to adapt to potential impacts of climate change suggest the need for more strategic, risk-based approaches to investment decisions.

Recommendation 3: Transportation planners and engineers should use more probabilistic investment analyses and design approaches that incorporate techniques for trading off the costs of making the infrastructure more robust against the economic costs of failure. At a more general level, these techniques could also be used to communicate these trade-offs to policy makers who make investment decisions and authorize funding.

Finding: Transportation professionals often lack sufficiently detailed information about expected climate changes and their timing to take appropriate action.

Recommendation 4: The National Oceanic and Atmospheric Administration, the U.S. Department of Transportation (USDOT), the U.S. Geological Survey, and other relevant agencies should work together to institute a process for better communication among transportation professionals, climate scientists, and other relevant scientific disciplines, and establish a clearinghouse for transportation-relevant climate change information.

Finding: Better decision support tools are also needed to assist transportation decision makers.

Recommendation 5: Ongoing and planned research at federal and state agencies and universities that provide climate data and decision support tools should include the needs of transportation decision makers.

Finding: Projected increases in extreme weather and climate underscore the importance of emergency response plans in vulnerable locations and require that transportation providers work more closely with weather forecasters and emergency planners and assume a greater role in evacuation planning and emergency response.

Recommendation 6: Transportation agencies and service providers should build on the experience in those locations where transportation is well integrated into emergency response and evacuation plans.

Finding: Greater use of technology would enable infrastructure providers to monitor climate changes and receive advance warning of potential failures due to water levels and currents, wave action, winds, and temperatures exceeding what the infrastructure was designed to withstand.

Recommendation 7: Federal and academic research programs should encourage the development and implementation of monitoring technologies that could provide advance warning of pending failures due to the effects of weather and climate extremes on major transportation facilities.

Finding: The geographic extent of the United States—from Alaska to Florida and from Maine to Hawaii—and its diversity of weather and climate conditions can provide a laboratory for identifying best practices and sharing information as the climate changes.

Recommendation 8: The American Association of State Highway and Transportation Officials (AASHTO), the Federal Highway Administration, the Association of American Railroads, the American Public Transportation Association, the American Association of Port Authorities, the Airport Operators Council, associations for oil and gas pipelines, and other relevant transportation professional and research organizations should develop a mechanism to encourage sharing of best practices for addressing the potential impacts of climate change.

Finding: Reevaluating, developing, and regularly updating design standards for transportation infrastructure to address the impacts of climate change will require a broad-based research and testing program and a substantial implementation effort.

Recommendation 9: USDOT should take a leadership role, along with those professional organizations in the forefront of civil engineering practice across all modes, to initiate immediately a federally funded, multiagency research program for ongoing reevaluation of existing and development of new design standards as progress is made in understanding future climate conditions and the options available for addressing them. A research plan and cost proposal should be developed for submission to Congress for authorization and funding of this program.

Recommendation 10: In the short term, state and federally funded transportation infrastructure rehabilitation projects in highly vulnerable locations should be rebuilt to higher standards, and greater attention should be paid to the provision of redundant power and communications systems to ensure rapid restoration of transportation services in the event of failure.

Finding: Federal agencies have not focused generally on adaptation in addressing climate change.

Recommendation 11: USDOT should take the lead in developing an interagency working group focused on adaptation.

Finding: Transportation planners are not currently required to consider climate change impacts and their effects on infrastructure investments, particularly in vulnerable locations.

Recommendation 12: Federal planning regulations should require that climate change be included as a factor in the development of public-sector long-range transportation plans; eliminate any perception that such plans should be limited to 20 to 30 years; and require collaboration in plan development with agencies responsible for land use, environmental protection, and natural resource management to foster more integrated transportation–land use decision making.

Finding: Locally controlled land use planning, which is typical throughout the country, has too limited a perspective to account for the broadly shared risks of climate change.

Finding: The National Flood Insurance Program and the FIRMs used to determine program eligibility do not take climate change into account.

Recommendation 13: FEMA should reevaluate the risk reduction effectiveness of the National Flood Insurance Program and the FIRMs, particularly in view of projected increases in intense precipitation and storms. At a minimum, updated flood zone maps that account for sea level rise (incorporating land subsidence) should be a priority in coastal areas.

Finding: Current institutional arrangements for transportation planning and operations were not organized to address climate change and may not be adequate for the purpose.

Recommendation 14: Incentives incorporated in federal and state legislation should be considered as a means of addressing and mitigating the impacts of climate change through regional and multistate efforts.

Part 2: Changes in water availability

Fukushima update—The current state of F-D cleanup, part 5

November 19th, 2013

The previous two posts in this series looked at a number of concerns from the anti-nuclear community, and some newspapers that should know better, and found no evidence for their concerns. However, concerns about how Japan and Tepco are doing have been expressed by more credible sources. This is an update on those concerns, mostly about water. What I learned while researching this is that they are not about safety, but about reassuring the public, and doing the project right—whether or not safety is an issue.

The first rather lengthy section comes from an article by an adviser to the Japanese with experience in the U.S. cleanup after Three Mile Island. This is followed by some short sections linking to high level criticisms over Japanese handling of the Fukushima accident—some recent, some older.

Lake Barrett, Tepco Adviser, writes about the problems in Japan

Lake Barrett has been brought in by Tokyo Electric Power (Tepco) as an advisor on cleanup of the Fukushima Dai-ichi accident. He headed the Three Mile Island Cleanup Site Office for Nuclear Regulatory Commission (NRC) from 1980 to 1984, in the years immediately after the 1979 accident.

In an article in Bulletin of the Atomic Scientists, Barrett summarizes the current state of the accident. Bottom line—the Japanese have done heroic work so far. They have to deal with a number of water issues. The problems are much more about public confidence than safety, and Japan coming to terms with how much of its admittedly significant resources to spend on relatively minor issues. Here are a few more details:

• Even accidents that have low health impacts such as the Fukushima accident can be socially disruptive and have huge cleanup costs.

• In the U.S. (and presumably elsewhere), multibillion dollar improvements were implemented after both Three Mile Island and F-D.

• Contaminated water is part of the mess of cleanup, more at F-D than at TMI. While the contamination is

at a very low level and presents little risk to the public or the environment… [still it can be] significant from a public-confidence perspective. So it is vitally important that Japan have a comprehensive accident cleanup plan in place that is not only technically protective of human health and the environment, but is also understood to be protective by the public…

[Tepco] has worked hard and has indeed contained most of the significant contamination carried by water used to cool the plant’s damaged reactor cores. Still, a series of events—including significant leakage from tanks built to hold radioactive water—has eroded public confidence….[The plan used] needs to include a new level of transparency for and outreach to the Japanese public, so citizens can understand and have confidence in the ultimate solution to the Fukushima water problem, which will almost certainly require the release of water—treated so it conforms to Japanese and international radioactivity standards—into the sea…

While most of the highly contaminated water has been dealt with, Tepco and the Japanese government are “having great difficulty in managing the overall contaminated-water situation, especially from a public-confidence perspective. The engineering challenge—control of a complex, ad hoc system of more than 1,000 temporary radioactive water tanks and tens of miles of pipes and hoses throughout the severely damaged plant—is truly a herculean task. Explaining what is going on and what has to be done to an emotional, traumatized, and mistrusting public is an even larger challenge.

The politics of the solutions are more challenging than the technical solutions.

• The technical aspects of the problems are mostly about water:
—340,000 tons (cubic meters), or 90 million gallons, of radioactive water stored in more than 1,000 tanks. Most of the radioactivity, the cesium-134 and cesium-137, as well as oils and salts, has been removed, and this water is being recycled back into the cores to continue cooling them. The current method of cleaning the water does not remove strontium.
—Ground water is leaking into the reactor cores (this is where most of the 340,000 tons of stored water comes from).

This building-basement water is the highest-risk water associated with the Fukushima situation. That water is being handled reasonably well at present, but because of the constant in-leakage of groundwater, some ultimate disposition will eventually be necessary. [A system of cleaning the water is now being tested.] In fact, I am writing this article while sitting on an airplane, and I am receiving more ionizing radiation from cosmic rays at this higher altitude than I would receive from drinking effluent water from the Advanced Liquid Waste Processing System.

—Also of concern,

water flowed into underground tunnels that connect buildings at the plant, and into seawater intake structures. These many tunnels contain hundreds, if not thousands, of pipes and cables. Most of these were non-safety grade tunnels that were cracked by the earthquake. In March and April 2011, therefore, fairly large volumes of highly contaminated water likely flowed into the ground near the sea and, at some points, directly into the sea….Although the amount of radioactivity in this groundwater is only a very small fraction of what was released in March and April 2011, this contamination has become an emotional issue, because the public believes it had been told the leakage was stopped. It is in fact true that the gross leakage of highly contaminated water from Fukushima buildings and pipes has been stopped. Still, approximately 400 tons (105,000 gallons) of groundwater per day is moving toward the sea from these areas, and it contains some contamination from these earlier leakage events. The amount of radioactivity in this water flow does not represent a high risk; the concentrations are generally fairly low…Regardless of the relatively low concentration of radioactive contaminants and Tepco’s efforts at containment, the water entering the sea in an uncontrolled manner is very upsetting to many people.

—Cesium that settled on the soil in the early days of the accident will be washed into the ocean; Tepco can’t prevent this large-volume, low-radioactivity transfer to the ocean, which amounts to 600 tons (155,000 gallons) of water per day.

• Tepco can do better, with some suggestions. But bottom line:

Enormous amounts of scarce human and financial resources are being spent on the current ad hoc water-management program at Fukushima, to the possible detriment of other high-importance clean up projects. Although Japan is a rich country, it does not have infinite resources. Substantial managerial, technical, and financial resources are needed for the safe removal of spent nuclear fuel from the units 1, 2, 3, and 4 spent fuel pools, and to develop plans and new technologies for eventually digging out the melted cores from the three heavily damaged reactor buildings. Spending billions and billions of yen on building tanks to try to capture almost every drop of water on the site is unsustainable, wasteful, and counterproductive. Such a program cannot continue indefinitely…I see no realistic alternative to a program that cleans up water with improved processing systems so it meets very protective Japanese release standards and then, after public discussion, conducts an independently confirmed, controlled release to the sea.

Videos of current Tepco plans

Cleanup plans

Plans for spent fuel pool 4

A number of Japanese reports were highly self-critical

Reports issued by different levels of the Japanese government and various regulatory bodies and academics saw the Japanese culture of safety as inadequate; this, as well as a once in a millennium tsunami, led to the accident. A number of reports emphasized that the Japanese had failed to learn from major accidents such as Three Mile Island in the U.S. and the flooding of the French Blayais nuclear plant in 1999. Nor had they seen it as necessary to make improvements incorporated over time in other countries.

Atsuyuki Suzuki, former president of the Japanese Atomic Energy Agency, and now senior scientific adviser to his successor, listed some of these reports in a talk at UC, Berkeley (start around 8 minutes for a longer list):
• country specific-groupthink with consensus first, overconfidence, etc (Parliament)
• human caused disaster, lack of emergency preparedness (government)
• lack of safety consciousness, ignoring both natural events and worker training (academic)

Suzuki emphasized the reluctance to learn from accidents and insights in other countries, as well as “undue concerns about jeopardizing local community’s confidence if risks are announced” and the “regulator’s difficult position due to the public perception that the government must be prevailingly correct at every moment.” He also talked about the time it takes to move to a safety culture.

We are now seeing outreach to non-Japanese experts. Japanese Deputy Foreign Minister Shinsuke Sugiyama and U.S. Deputy Secretary of Energy Daniel Poneman met November 4, 2013 in the second of a series of meetings to establish bilateral nuclear cooperation. In addition to Lake Barrett, Dale Klein (former head of the U.S. NRC), Barbara Judge, former head of the UK Atomic Energy Authority, and others have been invited as advisers. Time will tell if this cooperation continues, and if Japan incorporates improvements in parallel with those required in other countries.

Outside criticisms of Japanese and Tepco management of the cleanup, and of communication

There is general agreement that the Japanese government was trained at the anti-Tylenol school of disaster communication.

The World Nuclear Association posted on August 28, 2013, about Japan’s Nuclear Regulation Authority’s failure to listen to the International Atomic Energy Agency. The NRA went back and forth on how to rate an incident, rating it a 3 when it should have been a 1 or 0 (incidents are rated from 1 to 3 on the International Nuclear Event Scale, or INES; accidents from 4 to 7):

“In Japan we have seen a nuclear incident turn into a communication disaster,” said Agneta Rising, Director General of the World Nuclear Association. “Mistakes in applying and interpreting the INES scale have given it an exaggerated central role in coverage of nuclear safety.” WNA noted that the leakage from a storage tank “was cleared up in a matter of days without evidence of any pollution reaching the sea.” “However, news of the event has been badly confused due to poor application and interpretation of the International Nuclear Event Scale (INES), which has led to enormous international concern as well as real economic impact.” The regulator’s misuse of the International Nuclear Event Scale ratings “cannot continue: if it is to have any role in public communication, INES must only be used in conjunction with plain-language explanations of the public implications – if any – of an incident,” said Rising.

WNA urged Japan’s Nuclear Regulatory Authority to listen to the advice it has received from the International Atomic Energy Agency: “Frequent changes of rating will not help communicate the actual situation in a clear manner,” said the IAEA in a document released by the NRA. The IAEA questioned why the leak of radioactive water was rated as Level 3 on the INES scale: “The Japanese Authorities may wish to prepare an explanation for the media and the public on why they want to rate this event, while previous similar events have not been rated.” Since then the NRA has admitted that the leak could have been much smaller than it said, and also it transpires that the water in the tank was 400 times less radioactive than reported (0.2 MBq/L, not 80 MBq). The maximum credible leakage was thus minor, and the Japan Times 29/8 reports the NRA Chairman saying “the NRA may reconsider its INES ranking should further studies show different amounts of water loss than those provided by Tepco.” The last three words are disingenuous, in that Tepco had said that up to 300 m3 might have leaked, it was NRA which allowed this to become a ‘fact’. Maybe back to INES level 1 or less for the incident.

Since the leak was discovered, each announcement has been a new media event that implied a worsening situation. “This is a sad repeat of communication mistakes made during the Fukushima accident, when INES ratings were revised several times,” said Rising. “This hurt the credibility of INES, the Japanese government and the entire nuclear sector – all while demoralising the Japanese people needlessly.” “INES will continue to be used ….. but it represents only one technical dimension of communication and that has now been debased.”

There were concerns about whether, and how effectively, Japan was requesting help, in this case on the permafrost project:

The Japanese firms involved appear to be taking a go-it-alone approach. Two weeks ago, a top official at Tokyo Electric Power (Tepco) signaled that the utility behind the Fukushima disaster would seek international assistance with the Fukushima water contamination crisis. But experts at U.S.-based firms and national labs behind the world’s largest freeze-wall systems—and the only one proven in containing nuclear contamination—have not been contacted by either Tepco or its contractor, Japanese engineering and construction firm Kajima Corp.

There was high level concern about both planning and communication:

Tepco needs “to stop going from crisis to crisis and have a systematic approach to water management,” Dale Klein, the chairman of an advisory panel to Tepco and a former head of the U.S. Nuclear Regulatory Commission, said.

Appearing to believe that no one in Japan was explaining radioactivity to the Japanese, some outside experts discussed health issues with the Japanese public.

And all along there have been concerns about how the Japanese treat workers at the Fukushima and other nuclear plants. Worker pay at the Fukushima plant recently doubled to $200/day, and better meals will be provided. It appears to remain true that a majority of workers at Japanese power plants are picked up on street corners.

Getting to Safety

The story of how workers are treated has little to do with safety issues, apparently, although it’s harder to have most of your staff trained in safety procedures if they are irregular workers. It helps maintain an image of Tepco as a company that doesn’t care about employees. Contrast this with Alcoa’s experience under Paul O’Neill. As discussed by Charles Duhigg in The Power of Habit, O’Neill focused on safety. Managers went into self-protective mode and began asking workers how to make the workplace safer, and while they were talking, workers shared other ideas. Alcoa became highly profitable because of the focus on safety.

It takes a while to shift to a culture that emphasizes safety. The U.S. process has been aided by the Nuclear Regulatory Commission, which ordered very costly upgrades after Three Mile Island. The safer plants operated with fewer unplanned outages, and capacity factor, the percentage of time the plant is running, went from less than 60% in 1979 to 90% today. Since a very expensive capital investment that is not operating is an unprofitable investment, the effect of the NRC’s regulations was to make the industry profitable. Additionally, the U.S. has another tool for improvement, the Institute of Nuclear Power Operations (INPO). INPO describes its purpose:

INPO employees work to help the nuclear power industry achieve the highest levels of safety and reliability – excellence – through:

• Plant evaluations
• Training and accreditation
• Events analysis and information exchange
• Assistance

In the Q&A at the end of the Suzuki talk, one person asserted that INPO’s actions have been even more important than the NRC’s, and I saw head-nodding. Suzuki’s response was that Japan is not ready for the current U.S. NRC/INPO path. [The idea is that just as people need to learn a variety of motions, sideways movement and falling, before learning complicated games like basketball, Japan has to spend time learning simpler skills, or unlearning habits that make consensus work better in a country with such high population density.]

Summary

It takes years to shift to a culture of safety, and just because some industries get there doesn’t mean others aren’t left behind. In the U.S., any number of industries are far from giving us a sense that they are safe: natural gas, oil refineries, and chemical plants all have worse records than nuclear power. But nuclear is held to different standards, and there will be world pressure on all nations with nuclear power to take international advice. They may not. We can hope that they do, and that a culture of safety spreads to other, more dangerous industries.

Both getting to a culture of safety, and staying there, are helped by sensible decisions imposed by a regulatory body AND improvements in the workplace culture. This means more communication, more respect for workers, and workers who have a commitment to the company (not day labor). Nuclear utilities, in every country, would benefit from communication about best practices elsewhere, at both a regulatory level and a workplace level a la INPO in the U.S. It appears that pressure from the Japanese public and nuclear professionals outside Japan is moving the Japanese in this direction. There are studies focused on workplace cultures, with less superficial recommendations; hopefully utilities around the world are paying attention to these as well.

The Japanese social structure appears to encourage poor communication about risks beforehand, and gratuitous and expensive “protective actions” later, such as cleanup to a level far beyond what international organizations see as necessary. The effect is increasing public anxiety, and shifting money from important projects.

Over time, and with ongoing shifts in Japanese society (or at least the nuclear portion), the dangers of new accidents, and concerns about this one, will decrease. Money will be spent in ways that contribute more to society. And Japan can return to fighting climate change.

Part 1 Bottom line numbers
Part 2 The state of the evacuation, food and fish
Part 3 The plume and fish come to North America
Part 4 The history of predictions on spent fuel rods

Climate departure

October 20th, 2013

A new analysis by Camilo Mora et al. from the University of Hawaii projects the dates of climate departure,

when the projected mean climate of a given location moves to a state continuously outside the bounds of historical variability

compared to 1860 to 2005. This is the date when the coldest year is warmer than the warmest year in our past.
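
Expressed as a small sketch, the departure year for a location is the first year after which every projected annual mean stays above the historical maximum. The function below is a toy rendering of that definition, not the Mora lab’s code.

```python
# Minimal sketch of the "climate departure" definition: the first year after
# which every projected annual mean temperature stays above the historical
# (1860-2005) maximum. Toy interface; not the Mora et al. code.
def climate_departure_year(historical_temps, projected_by_year):
    """historical_temps: annual means for 1860-2005 (iterable of floats).
    projected_by_year: dict mapping year -> projected annual mean."""
    historical_max = max(historical_temps)
    years = sorted(projected_by_year)
    for i, year in enumerate(years):
        # Departure: this year and every later year exceed the warmest
        # year in the historical record.
        if all(projected_by_year[y] > historical_max for y in years[i:]):
            return year
    return None  # no departure within the projection window

# Example with made-up numbers: the historical record maxes out at 15.0 C.
history = [14.2, 14.6, 15.0, 14.8]
projection = {2040: 14.9, 2041: 15.1, 2042: 15.3, 2043: 15.4}
print(climate_departure_year(history, projection))  # 2041
```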

Year of climate departure on our current emissions path

Worldwide, the average year for climate departure is 2047. The effects on the tropics are more serious, not because temperature increases will be larger, but because normal variability there is small. The mean year for the tropics under this scenario is 2038; for cities outside the tropics it is later (2053).

One consequence is that poor areas are likely to suffer first, with one city in Indonesia projected to see climate departure within the decade (2020). Lagos (2029) and Mexico City (2031) are projected to reach this point within two decades; both have populations over 20 million. A number of large cities in the United States are expected to see temperatures rise to historically unknown levels by the late 2040s and 2050s. This is for the high greenhouse gas emissions scenario, RCP8.5* (not the highest).

The Mora Lab site has projected dates for many more cities; check them out if your city isn’t listed.

The low greenhouse gas emissions scenario (RCP4.5*) delays average climate departure to 2069 but does not prevent it. The first affected area of Indonesia will see the date move up to 2025. Lagos won’t see climate departure until 2043, and Mexico City until 2050. A number of large cities in the US see their own dates delayed until the 2070s or even later.

The Mora paper discusses the effects on the ocean, which will see climate departure in the next decade or so. When considering temperature and acidity together, the ocean moved outside its normal variability in 2008.

For more information:
• Cities around the world
• Model predictions
• Wondering when your favorite species will see a world outside normal variability? See the table below.

Table: Year of climate departure, by species group

SPECIES GROUP RCP8.5 RCP4.5
Marine Birds 2054 2084
Terrestrial Reptiles 2041 2087
Amphibians 2039 2080
Marine Mammals 2042 2077
Terrestrial Birds 2038 2082
Terrestrial Mammals 2038 2079
Plants 2036 2077
Marine Fish 2039 2073
Cephalopods 2038 2074
Marine Reptiles 2038 2074
Seagrasses 2038 2073
Mangroves 2035 2070
Coral Reefs 2034 2070

* Some more information on RCP4.5 and RCP8.5

The Intergovernmental Panel on Climate Change has new scenarios, called Representative Concentration Pathways. The number, 4.5 or 8.5, is the extra flow of energy into the Earth system in 2100, in watts/sq meter, or W/m2. If the number is positive, there is still a net warming of Earth. The most optimistic scenario provided by IPCC in the latest report is RCP2.6, where the net flow of energy in is at the rate of 2.6 W/m2 in 2100, down from a peak a few decades from now at 3 W/m2. This scenario, considered perhaps too optimistic, is likely to keep the temperature increase below 2°C. A more plausible low scenario is RCP4.5, which would produce a temperature increase of close to 3°C over preindustrial times by the last two decades of the 21st century; RCP8.5, our current trajectory, could produce a temperature increase closer to 5°C.

RCP4.5 allows us to emit about 780 billion tons of carbon between 2012 and 2100; RCP8.5 allows about 1,685 billion tons of carbon in the same period. (Multiply the quantity of carbon by 44/12 to get the quantity of carbon dioxide.) Carbon emissions in 2012 were 9.7 billion tons of carbon; counting land use change, the number is even higher. The average rate of increase was 3.2%/year from 2000 to 2009 (doubling time = 22 years).
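These conversions are easy to check. Here is a minimal Python sketch using only the numbers quoted above; the 44/12 factor is simply the ratio of the molecular mass of CO2 to the atomic mass of carbon.

```python
from math import log

C_TO_CO2 = 44.0 / 12.0    # tons of CO2 per ton of carbon

rcp45_budget_gtc = 780    # billion tons of carbon allowed 2012-2100 under RCP4.5
rcp85_budget_gtc = 1685   # billion tons of carbon allowed 2012-2100 under RCP8.5
growth_rate = 0.032       # average emissions growth, 3.2%/year, 2000-2009

print(f"RCP4.5 budget: {rcp45_budget_gtc * C_TO_CO2:,.0f} billion tons CO2")
print(f"RCP8.5 budget: {rcp85_budget_gtc * C_TO_CO2:,.0f} billion tons CO2")

# Doubling time for emissions growing at a constant 3.2%/year
print(f"Doubling time: {log(2) / log(1 + growth_rate):.0f} years")   # about 22 years
```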

On the nature of science

October 18th, 2013

I posted a portion of the notebook used in the Friends General Conference 2013 workshop, Friends Process: Responding to Climate Change (Gretchen Reinhardt and I co-led it).

Go to On the Nature of Science to read more, leave comments here.

Topics:
• What is Science?
• Scientific Consensus
• How Scientists Communicate Results
• “But Scientists Are Always Changing Their Minds!”

Fukushima update—The history of predictions on spent fuel rods, part 4

October 3rd, 2013

Part 3 addresses dire but unsubstantiated warnings that North America is in danger from a radioactive plume and fish. This post focuses on another set of warnings: that the spent fuel pool at Fukushima Dai-ichi could turn out to be a major problem for human health, perhaps much worse than Chernobyl. Some of the material below comes from a Truthout article citing a number of anti-nuclear experts, and its links. How likely are their predictions? Did these people make reliable predictions in the past?

Introduction: basic facts to get us started

• Nuclear power plants don’t blow up like bombs.
• Spent fuel pools store fuel which is completely spent, or relatively fresh fuel during maintenance. Water keeps the fuel rods cool, and protects us from radiation, which can’t make it through 20 feet (about 6 meters) of water. Reactor 4’s spent fuel pool was unusually full. (See more here)
• The spent fuel pool for reactor 4 never blew up, and never dried up. There is no evidence that it is shaky.
• By April 2011, much was known:

Radionuclide analysis of water from the used fuel pool of Fukushima Dai-ichi unit 4 suggests that some of the 1331 used fuel assemblies stored there may have been damaged, but the majority are intact.

• There is a danger that spent fuel can catch fire. According to NUREG/CR-4982,

In order for a cladding fire to occur the fuel must be recently discharged (about 10 to 180 days for a BWR and 30 to 250 days for a PWR).

The Fukushima Dai-ichi reactors were BWRs, boiling water reactors. (More on NUREG series here.)

Incorrect predictions about the spent fuel pool began early

Paul Blustein writes about Nuclear Regulatory Commission (NRC) chair Gregory Jaczko’s recommendation that Americans evacuate if they were within 50 miles of the accident:

It was an honest mistake. On the morning of March 16, 2011, top officials of the U.S. Nuclear Regulatory Commission concluded that the spent fuel pool in Reactor No. 4 at Fukushima Dai-ichi must be dry.

Thus began an episode that had enormous implications for the trust that Japanese people have in their public officials. To this day, millions of Japanese shun food grown in the northeast region of their country; many who live in that area limit their children’s outdoor play, while others have fled to parts of Japan as far from Fukushima as possible. The reason many of them give is that they simply can’t believe what government authorities say about the dangers of radiation exposure.

The evidence that led a high official in the U.S. government to publicly attack the credibility of another government came from a drone flyover sensing heat; it did not rest on multiple lines of evidence (Was radioactivity especially high nearby? Which fission products were seen downwind of the plant?)

By that evening, Jaczko’s subordinates were already starting to hedge their assessments about the pool when the chairman joined another conference call. The U.S. staffers in Tokyo had heard from Japanese investigators that even though the exterior wall protecting the pool appeared to be demolished, an interior wall was evidently intact; the Japanese offered other evidence as well.

Chuck Casto, the Tokyo-based team leader, related those points to Jaczko, saying he still wasn’t convinced even after seeing a video of what the Japanese claimed was water in the pool. To Casto it was “really inconclusive.” But he acknowledged that the video, taken from a helicopter 14 hours earlier, showed steam emissions.

Jaczko knew his error within 24 hours of publicly stating it, although the U.S. NRC waited 3 months to share this information. (Jaczko resigned in mid-2012 because of widespread unhappiness with his management style.)

And then in 2012

A year later, we hear again vague warnings about the dangers of the fuel pools, this time in the NY Times:

Fourteen months after the accident, a pool brimming with used fuel rods and filled with vast quantities of radioactive cesium still sits on the top floor of a heavily damaged reactor building, covered only with plastic.

The public’s fears about the pool have grown in recent months as some scientists have warned that it has the most potential for setting off a new catastrophe, now that the three nuclear reactors that suffered meltdowns are in a more stable state, and as frequent quakes continue to rattle the region….

[Or if we don't like that idea], Some outside experts have also worked to allay fears, saying that the fuel in the pool is now so old that it cannot generate enough heat to start the kind of accident that would allow radioactive material to escape.

The author cites “scientists” but never names them, gives no evidence that they meet the traditional assumptions we hold for the word scientist, and provides no mechanism by which there might be problems now that the fuel rods have cooled down. It is a “he said, she said” article, and we are left to guess. For this article, at least, “outside experts” appear to know more than “some scientists”.

Robert Alvarez also warned us in 2012:

Spent reactor fuel, containing roughly 85 times more long-lived radioactivity than released at Chernobyl, still sits in pools vulnerable to earthquakes.” He warns of possible collapse from a combination of structural damage and another earthquake. “The loss of water exposing the spent fuel will result in overheating and can cause melting and ignite its zirconium metal cladding resulting in a fire that could deposit large amounts of radioactive materials over hundreds, if not thousands of miles.

Yet Tepco has continued to monitor the structural reliability of the spent fuel pool. A huge fire would be required before the radioactivity could be dispersed long distances, and per the introduction, such a fire won’t occur just because the fuel is exposed to the air. Is there some mechanism, up to and including nuclear bombs, that could actually disperse 85 times the radioactivity of Chernobyl? Or even 1% of that amount? Neither Alvarez nor any other writer appears to provide one.

Alvarez has experience working on nuclear weapons issues, in and out of government, but he has no science degree and does not publish for scientists—I checked only his claims to have published in the journal Science and in Technology Review. (Over time, I’ve learned to confirm assertions that people have published in respected journals; the latter is not peer reviewed.) My search in Technology Review found nothing. The Science piece (30 April 1982) was not an article, which would have undergone peer review, but a letter to the editor. In it, Alvarez defends scientists who claim that low-level radioactivity is 10 – 25 times worse than had been thought, a claim which has long had few if any adherents.

Dan Yurman addresses Alvarez’s claims in a little more detail and links to a Tepco video of fuel pool 4.

Arnie Gundersen adds even more worries

It’s 2013, and Gundersen is adding erroneous details about what can go wrong:

Well, they’re planning as of November to begin to do it, so they’ve made some progress on that. I think they’re belittling the complexity of the task. If you think of a nuclear fuel rack as a pack of cigarettes, if you pull a cigarette straight up it will come out — but these racks have been distorted. Now when they go to pull the cigarette straight out, it’s going to likely break and release radioactive cesium and other gases, xenon and krypton, into the air. I suspect come November, December, January we’re going to hear that the building’s been evacuated, they’ve broke a fuel rod, the fuel rod is off-gassing.

I suspect we’ll have more airborne releases as they try to pull the fuel out. If they pull too hard, they’ll snap the fuel. I think the racks have been distorted, the fuel has overheated — the pool boiled – and the net effect is that it’s likely some of the fuel will be stuck in there for a long, long time.

I am struck by an image of Japan as a society with no skilled workers or robots, no cameras, trying to accomplish by itself a job that will lead to the Apocalypse if they are off by 1 mm, totally unaware that the job they are facing is complex. The image is not really coming into focus.

Harvey Wasserman adds to this description here:

According to Arnie Gundersen, a nuclear engineer with forty years in an industry for which he once manufactured fuel rods, the ones in the Unit 4 core are bent, damaged and embrittled to the point of crumbling. Cameras have shown troubling quantities of debris in the fuel pool, which itself is damaged.

The engineering and scientific barriers to emptying the Unit Four fuel pool are unique and daunting, says Gundersen. But it must be done to 100% perfection.

Should the attempt fail, the rods could be exposed to air and catch fire, releasing horrific quantities of radiation into the atmosphere. The pool could come crashing to the ground, dumping the rods together into a pile that could fission and possibly explode. The resulting radioactive cloud would threaten the health and safety of all us.

As discussed in the introduction, the pools did not boil. It’s long past the time when there is a possibility that cladding for the fuel rods could catch fire.

Some background is needed to understand how ridiculous it is to claim that fission could result from the pool falling. Commercial nuclear reactors use water as a moderator (some use graphite). The basic idea is that uranium-235 fissions when hit by a neutron and releases a number of neutrons, one of which makes it to another U-235 atom, causing it to fission. Because commercial nuclear fuel contains relatively little U-235, it cannot go off like a bomb. Moderators slow the neutrons released when uranium fissions, because otherwise a neutron is moving too fast to cause another fission.

Spent fuel is put in water to cool it down, and the water is deep enough to prevent decay particles and fission fragments from making it out. But because water is a moderator, the rods are stored in borated racks. The racks control the geometry, keeping the fuel rods apart, and the boron absorbs neutrons, so that new fissions do not occur. The decay of the fission fragments goes on for a few years; cooling is needed because those small fission products produce heat as they decay, most intensely in the months right after the fuel leaves the reactor.
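To get a feel for why only recently discharged fuel is hot enough to threaten the cladding (the 10 – 180 day window in the NUREG quote above), here is a minimal sketch of how decay heat falls off with time. It uses the Way–Wigner approximation, a rough textbook formula I am assuming for illustration—not the method Tepco or the NRC uses.

```python
# Way-Wigner approximation for decay heat as a fraction of operating power:
#   P(t)/P0 ~ 0.0622 * (t**-0.2 - (t + T)**-0.2)
# where t = seconds since shutdown and T = seconds of prior operation.
# A rough textbook formula, used here only to show the trend.

SECONDS_PER_DAY = 86_400

def decay_heat_fraction(days_since_shutdown, days_of_operation=4 * 365):
    t = days_since_shutdown * SECONDS_PER_DAY
    T = days_of_operation * SECONDS_PER_DAY
    return 0.0622 * (t ** -0.2 - (t + T) ** -0.2)

for days in (1, 10, 180, 365, 730):
    print(f"{days:4d} days after shutdown: "
          f"{decay_heat_fraction(days) * 100:.2f}% of operating power")
```

The heat load keeps dropping: by the time fuel has been out of a reactor for a year or two it produces only a small fraction of the heat it did in its first days in the pool, which is why the cladding-fire window closes.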

For the Gundersen scenario of the pool crashing to the ground, dumping the rods together so that they could fission and possibly explode, the following would have to occur:
• structure breaks
• fuel rods fall in exactly the right geometry relative to each other
• the borated racks disappear, so there is no boron and no impediment to the fuel rods falling in the exact right geometry
• the fuel rods fall with exactly the right geometry into a pool of water, providing the needed moderator

This is the 100% perfection scenario Gundersen described, except here everything must go perfectly, improbably, wrong.

The actual procedure will move one bundle at a time into a cask with other bundles. The cask is then shielded and drained, and moved to ground level to a longer term storage facility. If a bundle is dropped, it may break, and there may be pellets scattered on the pool floor. No radioactivity will be released. When all the intact bundles are removed, bundles that presented a problem and any pellets will need to be separately moved. The entire process does not risk fission, nor will there be radiation release.

This step is occurring long before removal to dry cask storage to allow workers to ascertain whether any interesting changes occurred during the earthquake, tsunami, and/or soaking in salt water.

Gundersen has a long career of unsupported assertions. There was the time he found very radioactive soil in Tokyo:

Arnie Gundersen, chief engineer with Burlington-based Fairewinds Associates, says he traveled to Tokyo recently, took soil samples from parks, playgrounds and rooftop gardens around the city and brought them back to be tested in a U.S. lab.

He says they showed levels of radioactivity would qualify them as nuclear waste in the U.S.

The Nuclear Energy Institute, a U.S. industry lobby, asked Gundersen to share the lab results, and perhaps let an independent lab check them. No luck—Gundersen refused.

Gundersen, when interviewed in June 2011, had apparently forgotten Chernobyl, the Bhopal disaster (which killed more people, both immediately and over the long term, than Chernobyl), and so on:

Fukushima is the biggest industrial catastrophe in the history of mankind,” Arnold Gundersen, a former nuclear industry senior vice president, told Al Jazeera….

According to Gundersen, the exposed reactors and fuel cores are continuing to release microns of caesium, strontium, and plutonium isotopes. These are referred to as “hot particles”.

“We are discovering hot particles everywhere in Japan, even in Tokyo,” he said. “Scientists are finding these everywhere. Over the last 90 days these hot particles have continued to fall and are being deposited in high concentrations. A lot of people are picking these up in car engine air filters.”

Radioactive air filters from cars in Fukushima prefecture and Tokyo are now common, and Gundersen says his sources are finding radioactive air filters in the greater Seattle area of the US as well.

The hot particles on them can eventually lead to cancer.

“These get stuck in your lungs or GI tract, and they are a constant irritant,” he explained, “One cigarette doesn’t get you, but over time they do. These [hot particles] can cause cancer, but you can’t measure them with a Geiger counter. Clearly people in Fukushima prefecture have breathed in a large amount of these particles. Clearly the upper West Coast of the U.S. has people being affected. That area got hit pretty heavy in April.”

Plutonium was not released. A micron is one millionth of a meter; I’m not sure what a micron of cesium is. No evidence has been found that hot particles that can’t be detected with a Geiger counter are poisoning car air filters around Japan and in Seattle.

Gundersen got a master’s degree in nuclear engineering at the time the Fukushima plant was being built, and is now chief (and only) engineer at Fairewinds, an anti-nuclear group. While he began working in the nuclear field more than 4 decades ago, it is not correct to say that he has 4 decades of experience in it.

Harvey Wasserman says this is the most danger the world has been in since the Cuban Missile Crisis

The Truthout article links near the top to an online article in which Harvey Wasserman makes a lot of assertions, most of which don’t make sense.

• Steam indicates fission may be occurring underground.
• Irradiated water could leak from tanks if there is a really large earthquake—without quantifying the size of the earthquake, the level of radioactivity (most of the water in the tanks is only very mildly radioactive), etc. (More on this in part 5.)
• Evidence indicates increased thyroid cancer among children, despite the UN finding no such evidence after extensive testing.
• The GE-designed pool is 100′ up. A lot of this article is anti-Big Biz, so at some point, I began to infer that if GE or another large company designed it, I was supposed to believe that there must be a design flaw.

This is not Wasserman’s first set of predictions. Now the danger is ahead of us, but in 2011 he posited that it may have already happened:

At least one spent fuel pool—in Unit Four—may have been entirely exposed to air and caught fire. Reactor fuel cladding is made with a zirconium alloy that ignites when uncovered, emitting very large quantities of radiation. The high level radioactive waste pool in Unit Four may no longer be burning, though it may still be general.

I’m not sure what the last clause means.

He quotes Ken Buessler saying,

When it comes to the oceans, says Ken Buesseler, a chemical oceonographer at the Woods Hole Oceanographic Institution, “the impact of Fukushima exceeds Chernobyl.”

(typos in original) It is true that Chernobyl was far from any ocean, but apparently Buesseler didn’t and doesn’t think that the effects of Fukushima merit a Cuban Missile Crisis headline.

Fukushima’s owner, the Tokyo Electric Power Company, has confirmed that fuel at Unit One melted BEFORE the arrival of the March 11 tsunami.

NOT.

In 2012, Wasserman corrects the actual facts about Three Mile Island with untruths:

“Nobody died” at Three Mile Island until epidemiological evidence showed otherwise. (Disclosure: In 1980 I interviewed the dying and bereaved in central Pennsylvania, leading to the 1982 publication of Killing Our Own).

A link is provided to his book, which we can buy to learn more.

Harvey Wasserman is senior advisor and website editor for nukefree.org, which was created by 3 musicians (and I love all of them!) to fight nuclear power. Wikipedia says Wasserman has no degrees in science, and that he coined the phrase, “No nukes”.

Summary

All of those warning of the dangers of the fuel rods at Fukushima Dai-ichi—Alvarez, Gundersen, Wasserman, and the others cited in the Truthout article and other articles I’ve seen—say things that aren’t true. Readers, call them on it!

Part 1 Bottom line numbers
Part 2 The state of the evacuation, food and fish
Part 3 The plume and fish come to North America
Part 5 The current state of F-D cleanup

Fukushima update—the plume and fish come to North America, part 3

September 30th, 2013

Some of the oddest accusations about the Fukushima accident imply that it has affected or will affect the health of Americans.

Tsunami debris

Marine debris from the tsunami is expected to hit Hawaii this winter, and the US mainland in 2014. This is unrelated to the nuclear accident, but will it have health effects? Harm other species?
Marine debris, see NOAA for more information

The plume

A number of unrelated figures, such as this NOAA picture of tsunami height on March 11, 2011, have been alleged to represent a radioactive plume moving east across the Pacific:

Tsunami height becomes radiation? Snopes says nope: NOAA’s picture of tsunami height is not also a picture of the amount of radioactivity.

The current expectation is that the plume will reach Hawaii in the first half of 2014, and the West Coast of the US some years later. Estimates for Hawaii are 10 – 30 becquerels/cubic meter, and the plume will be more dilute when it hits the mainland, some 10 – 22 Bq/m3, according to Multi-decadal projections of surface and interior pathways of the Fukushima Cesium-137 radioactive plume. This radioactivity adds to >12,000 Bq/m3 in the ocean water itself (the great majority of this is potassium-40, also a large part of the natural radioactivity in our body).
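A minimal sketch putting those numbers side by side—only the figures quoted above, nothing else:

```python
# Compare the projected Fukushima plume to the ocean's natural radioactivity.
natural_bq_m3 = 12_000           # Bq/m3 already in seawater, mostly potassium-40
plume = {"Hawaii": (10, 30), "US West Coast": (10, 22)}  # projected Bq/m3 ranges

for place, (low, high) in plume.items():
    print(f"{place}: {low / natural_bq_m3:.2%} - {high / natural_bq_m3:.2%} "
          "of the natural background already in the water")
```

In other words, a fraction of a percent on top of what is already there.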

Lots of stuff travels to other hemispheres through the ocean and air—California gets enough Chinese coal pollution to challenge the state’s air pollution standards. (More interesting and less discussed—but why?)

Radioactive fish are traveling as well

The US, like a number of countries, requires tests of food if there is reason to think that food standards might not be met. So far as I know, the US isn’t bothering to test Pacific Ocean fish for radioactivity.

A partial list of odd assertions:

• Cecile Pineda, a novelist, has stayed in that genre with her recent discussions of Fukushima. She spoke recently in the SF East Bay on fish purportedly showing signs of radiation disease washing up in Vancouver, Oregon, and LA. Yet as we see below, the major radioactivity in almost all fish traveling to North America is natural.
• The Daily Mail offers radioactivity as an explanation for malnourished seal pups in CA. See the front page of The Daily Mail if you wonder about its general reliability.
• Bluefin tuna caught in CA last August are 10 x as radioactive as normal, according to a Huffington Post interpretation of a paper in the Proceedings of the National Academy of Sciences. NOT. Interestingly, the link the article provided gives different information: the fish, which were young and in Japan at the time of the accident, were 5 x as radioactive as normal if you count just the cesium (5 becquerels rather than 1). This is in part because cesium washes out unless the fish keep ingesting it.

The actual facts are not frightening. According to Evaluation of radiation doses and associated risk from the Fukushima nuclear accident to marine biota and human consumers of seafood in the Proceedings of the National Academy of Sciences,

Abstract: Radioactive isotopes originating from the damaged Fukushima nuclear reactor in Japan following the earthquake and tsunami in March 2011 were found in resident marine animals and in migratory Pacific bluefin tuna (PBFT). Publication of this information resulted in a worldwide response that caused public anxiety and concern, although PBFT captured off California in August 2011 contained activity concentrations below those from naturally occurring radionuclides.

To link the radioactivity to possible health impairments, we calculated doses, attributable to the Fukushima-derived and the naturally occurring radionuclides, to both the marine biota and human fish consumers. We showed that doses in all cases were dominated by the naturally occurring alpha-emitter 210Po and that Fukushima-derived doses were three to four orders of magnitude below 210Po-derived doses….

Their report begins,

Recent reports describing the presence of radionuclides released from the damaged Fukushima Daiichi nuclear power plant in Pacific biota have aroused worldwide attention and concern. For example, the discovery of 134Cs and 137Cs in Pacific bluefin tuna (Thunnus orientalis; PBFT) that migrated from Japan to California waters was covered by >1,100 newspapers worldwide and numerous internet, television, and radio outlets. Such widespread coverage reflects the public’s concern and general fear of radiation. Concerns are particularly acute if the artificial radionuclides are in human food items…

The “three to four orders of magnitude” says that the added radioactivity from the Fukushima accident is, give or take, 1,000 – 10,000 times less important than natural radioactivity. The relative interest in bluefin tuna radioactivity over Chinese air pollution in North America appears to be explained in the opening paragraph.

Table 1 provides mean radioactivity decay rates for the following elements:

Bluefin tuna arriving in San Diego, August 2011
cesium (both Cs-134 and Cs-137), 10.3 becquerel/kg dry
potassium-40, 347 Bq/kg dry
polonium-210, 79 Bq/kg dry

Japan, April 2011
cesium, 155 Bq/kg dry
potassium-40, 347 Bq/kg dry
polonium-210, 79 Bq/kg dry

The polonium will have significantly more health effects per becquerel—polonium is an alpha emitter, stored differently in the body, etc.

In the same table, the authors assume that Americans get their entire average annual seafood consumption, 24.1 kg = 53 pounds/year, from bluefin tuna, and calculate health effects. They do the same for the Japanese, assuming 56.6 kg = 125 pounds/year. It is not clear that the authors consider how long radioactive atoms remain in our body, since we excrete them along with other atoms; the numbers below may overstate the case, as the authors assume a residence time as long as 50 years. (A rough check of the arithmetic follows the numbers.)

San Diego, August 2011
cesium, 0.9 µSv (microsievert, see Part 2 for more on units)
potassium-40, 12.7 µSv
polonium-210, 558 µSv

Japan, April 2011
cesium, 32.6 µSv
potassium-40, 29.7 µSv
polonium-210, 1,310 µSv
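As a rough check on how numbers like these are built, here is a minimal sketch of an ingestion dose estimate: activity concentration × annual consumption × a dose coefficient. The dose coefficients and the dry-to-wet conversion factor below are round-number assumptions on my part, roughly the standard ICRP adult values, not the paper’s exact inputs, so expect only approximate agreement.

```python
# Rough ingestion dose: concentration x consumption x dose coefficient.
# Dose coefficients (sieverts per becquerel ingested) are assumed round numbers,
# roughly the ICRP adult values; the paper's exact inputs may differ.
DOSE_COEFF_SV_PER_BQ = {
    "cesium": 1.3e-8,        # Cs-137; Cs-134 is similar in magnitude
    "potassium-40": 6.2e-9,
    "polonium-210": 1.2e-6,  # an alpha emitter, hence the much larger coefficient
}
DRY_TO_WET = 0.25            # assumed dry-to-wet weight ratio for fish tissue

def annual_dose_uSv(bq_per_kg_dry, kg_eaten_per_year, nuclide):
    bq_ingested = bq_per_kg_dry * DRY_TO_WET * kg_eaten_per_year
    return bq_ingested * DOSE_COEFF_SV_PER_BQ[nuclide] * 1e6   # Sv -> microsieverts

# San Diego bluefin tuna, August 2011; 24.1 kg of seafood/year, all as this tuna
for nuclide, bq_kg_dry in [("cesium", 10.3), ("potassium-40", 347), ("polonium-210", 79)]:
    print(f"{nuclide:13s}: {annual_dose_uSv(bq_kg_dry, 24.1, nuclide):6.1f} microsievert/year")
```

With these assumptions the sketch lands close to the paper’s 0.9, 12.7, and 558 µSv figures, which is all it is meant to show: the polonium dominates because of its much larger dose per becquerel.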

Radioactivity due to cesium in tuna, in Japanese waters and elsewhere, has declined dramatically since 2011.

Bottom line

The accident at Fukushima added an insignificant level of radioactivity to that already in seawater and fish, at least for those of us who are far away. As mentioned in Part 2, a small number of bottom feeders in the area immediately adjacent to the plant have levels of radioactivity which don’t meet international standards.

A good portion of the American Fukushima discussion I’m seeing asks, “How will Fukushima affect me?” The answer: if it were unhealthy for Americans, the effects in Japan would be far more dramatic. Contrast this with Chinese air pollution, affecting CA air quality after killing many hundreds of thousands yearly in China.

Part 1 Bottom line numbers
Part 2 The state of the evacuation, food and fish
Part 4 The history of predictions on spent fuel rods
Part 5 The current state of F-D cleanup

Fukushima updates on evacuation, food, and fish, part 2

September 28th, 2013

This post looks at what is happening with the Fukushima evacuation, and at how the radioactivity in Fukushima compares to other places people visit and live. Also: the cleanup, food and fish, and the cost of increased use of fossil fuels.

Many places in the world have high natural background radiation

According to World Nuclear Association,

Naturally occurring background radiation is the main source of exposure for most people, and provides some perspective on radiation exposure from nuclear energy. The average dose received by all of us from background radiation is around 2.4 mSv/yr, which can vary depending on the geology and altitude where people live – ranging between 1 and 10 mSv/yr, but can be more than 50 mSv/yr. The highest known level of background radiation affecting a substantial population is in Kerala and Madras states in India where some 140,000 people receive doses which average over 15 millisievert per year from gamma radiation, in addition to a similar dose from radon. Comparable levels occur in Brazil and Sudan, with average exposures up to about 40 mSv/yr to many people. (The highest level of natural background radiation recorded is on a Brazilian beach: 800 mSv/yr, but people don’t live there.)

Several places are known in Iran, India and Europe where natural background radiation gives an annual dose of more than 100 mSv to people and up to 260 mSv (at Ramsar in Iran, where some 200,000 people are exposed to more than 10 mSv/yr).

Units* are explained at the end of this post.

That list is far from complete; there are a number of other places with high background radioactivity:
• Finland, population 5.4 million, almost 8 millisievert each year (mSv/year)
• parts of Norway over 10 mSv/year
• Yangjiang, China, population 2.6 million, > 6 mSv/year
• Denver, population 2.6 million, 11.8 mSv/year
• Arkaroola, South Australia, 100 x more radioactive than anywhere else in Australia. The hot springs are hot because of radioactive decay!
• Guarapari, Brazil, where the black sand on the beach comes in at 90 µSv/hr using the 800 mSv/year figure above, but higher recordings have been seen, up to 130 µSv/hr. People are permitted to sit where they will on the beach without wearing any special hazmat outfit.
• Radon was first discovered as a major portion of our exposure when Stanley Watras triggered the alarm at his local nuclear power plant. His basement was more than 800 µSv/hour.
• Etc… Cornwall … etc…southwest France…etc…
• Air travel increases our exposure to radioactivity, by about 4 – 7 µSv/hour, more for the Concorde NY to Paris route.

Numbers provided by different sources vary for a number of reasons. Some sites don’t include our own internal radioactivity, about 0.4 mSv/year. Some look at the maximum, some at the maximum people actually live with, some at the average.

Japanese evacuation categories

Over 160,000 people were evacuated in 2011. The Japanese government only allowed return to begin in 2012 where the dose would be less than 20 mSv/year the first year back, although decontamination would continue. Restrictions exist for areas not expected to drop below 20 mSv/year by March 2016, 5 years after the accident; these include about half the 20 km (12 mile) evacuation zone. As of now, all towns can be visited, although some visits are restricted, including Futaba, the town closest to the plant, where many houses were destroyed by the tsunami.

The Japanese government has 4 categories for evacuation:
—difficult-to-return zones, with evacuation expected to be at least 5 years from March 2012
—no-residence zones, where people will be able to return earlier
—zones preparing for the evacuation order to be lifted
—planned evacuation zone, “a high-risk zone to the northwest of the plant and outside the 20-kilometer radius that is yet to be reclassified into any of the three other categories.”


Zone                             Dose 11/2011 (µSv/hr)   Dose 3/2013 (µSv/hr)   At the 3/2013 level (mSv/year)
Difficult to return              14.5                    8.5                    74
No-residence                     5.7                     3.7                    32
Evacuation order to be lifted    2.0                     1.1                    9.6
Planned evacuation zone          2.7                     1.5                    13

Table: Radioactivity decline over 17 months.

Of course, the level of radioactivity will continue to decline. The rate of decrease is about the same as was seen in the areas around Chernobyl, where cesium declined with a half life of 0.7 – 1.8 years; the decline in the zones around the Fukushima plant was about 40% in 1.6 years. The areas around Chernobyl saw a rapid decrease for 4 – 6 years, so it would not be surprising if by January 2015 all rates had dropped by half, and by November 2016 all rates had dropped by half again, even without special cleanup work. The difficult-to-return zones would then average about 2.1 µSv/hr, a temporary rate of 19 mSv/year or less, by November 2016. Assuming the Japanese experience is the same as in the areas around Chernobyl, the rate should continue to decline rapidly between 2011 and 2015 – 2017.
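A minimal sketch of that projection, assuming dose rates simply halve roughly every 1.8 years (the fast end of the Chernobyl ecological half-lives quoted above):

```python
HOURS_PER_YEAR = 8_760

# Difficult-to-return zone averaged 8.5 uSv/hr in March 2013.
# Assume rates halve roughly every 1.8 years, as sketched in the text.
rate_march_2013 = 8.5
rate_jan_2015 = rate_march_2013 / 2          # about 1.8 years later
rate_nov_2016 = rate_jan_2015 / 2            # about 1.8 years after that

print(f"January 2015:  {rate_jan_2015:.1f} uSv/hr")
print(f"November 2016: {rate_nov_2016:.1f} uSv/hr "
      f"= {rate_nov_2016 * HOURS_PER_YEAR / 1000:.0f} mSv/year")  # ~2.1 uSv/hr, ~19 mSv/yr
```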

To get some idea of radioactivity in the area northwest from the Fukushima-Daiichi plant, go to this map which is updated frequently (although we are unlikely to see any change day to day). Note that you can get more detailed information by placing your cursor over the sites; the most radioactive site at the end of September 2013 was 26 µSv/hr. The sensors are in place and sending information to the Japanese NRA (nuclear regulatory agency).

Note: nowhere on this map is as radioactive as a number of places where people travel freely, such as Guarapari, Brazil or Ramsar, Iran.

How is Japan doing on the cleanup?

In November 2011, a team from the International Atomic Energy Agency thought that Japan deserved good grades for prompt attention to cleanup, and poor grades on setting reasonable priorities.

In practical terms this translates to focusing on the quickest dose reduction, without unwanted side effects like classifying millions of tonnes of very lightly contaminated topsoil as ‘radioactive waste’. It may be desirable to remove this soil from childrens’ playgrounds, for example, but some of the material may pose no realistic threat to health and could be recycled or used in construction work, said the IAEA team.

Another point of consideration is the handling of large open areas like forests. “The investment of time and effort in removing contamination beyond certain levels… where the additional exposure is relatively low, does not automatically lead to a reduction of doses for the public.” Japanese authorities have already noted that removing some contaminated leaf mold could have a greater harmful effect on some parts of the ecosystem.

The Japanese appear to be spending lots of money to bring the level of radioactivity well below 20 mSv/year, at best only partially following IAEA recommendations:

A further 100 municipalities in eight prefectures, where air dose rates are over 0.23 µSv per hour (equivalent to over 1 mSv per year) are classed as Intensive Decontamination Areas, where decontamination is being implemented by each municipality with funding and technical support from the national government.

Work has been completed to target levels in one municipality in the Special Decontamination Areas: Tamura, where decontamination of living areas, farmland, forest and roads was declared to be 100% complete in June 2013. Over a period of just under a year, workers spent a total of 120,000 man days decontaminating nearly 230,000 square metres of buildings including 121 homes, 96 km of roads, 1.2 million square metres of farmland and nearly 2 million square metres of forests using a variety of techniques including pressure washing and topsoil removal.

Meanwhile, other municipalities hope to receive the classification and the money that goes with it.

What about the food?

Japan allows less radioactivity in food and water than many other parts of the world. For example, before the accident Japan set its water safety level at 1/5 the level of the European standard, and then lowered it further. Its assumptions about the health effect of various decay rates in food and water appear to me to assume that radioactivity from food and water comes in, but never leaves.

The US standard is 1,200 Bq/L for water, and 1,250 Bq/kg (570 Bq/pound) for solid food.

The World Health Organization standard for infants is 1,600 Bq/L radioactive iodine, and 1,800 Bq/L radioactive cesium (table 6 here).

Similarly, the Japanese food standard for radioactivity began lower than that in other countries, and the Japanese lowered it even further. This has repercussions for Japanese farmers—more than a year ago, 30 out of almost 5,000 farms in the relatively contaminated areas grew rice too radioactive to sell, although it would have been safe according to standards elsewhere; under the even more rigorous new standards, 300 farms would have problems selling their rice.

• The new Japanese standards are 10 Bq/L for water, 50 Bq/L for milk (because the Japanese drink less milk), and 100 Bq/kg for solid foods.

“Scientists say [the much higher international] limits are far below levels of contamination where they can see any evidence of an effect on health.”

There are a number of foods naturally more radioactive than the new Japanese standard; Brazil nuts, for example, can be as much as 440 Bq/kg. Even though the becquerel is a decay rate and not a health effect, the health effect per becquerel from cesium is about the same as from the radioactive atoms normally in food, like potassium.

From a Woods Hole article on seafood,

In one study by the consumer group Coop-Fukushima, [Kazuo Sakai, a radiation biophysicist with Japan’s National Institute of Radiological Sciences,] reported, 100 Fukushima households prepared an extra portion of their meals to be analyzed for radioactivity. The results showed measurable amounts of cesium in only three households, and in all cases showed that naturally occurring radiation, in the form of potassium-40, was far more prevalent.

The article contains a statement by a Japanese pediatric oncologist recommending massive removal of topsoil and the like, so that levels of radioactivity would be below natural background in the US, with the idea of reassuring Japanese citizens.

Ironically, some suggested, the Japanese government’s decision to lower acceptable radiation limits in fish may have actually heightened consumer fears instead of dampening them. Deborah Oughton, an environmental chemist and ethicist at the Norwegian University of Life Sciences, related that the Norwegian government, when faced with high radioisotope concentrations in reindeer meat as a result of Chernobyl, decided to raise acceptable limits from 600 to 6,000 becquerels per kilogram. The move was made, she explained, to protect the livelihood of the minority Sami population that depends on reindeer herding for its survival.

Weighed into the judgment, she added, was the issue of dose: The hazard involves not only how high the levels are in meat, but how much you eat—and Norwegians rarely eat reindeer meat more than once or twice a year. The decision had no impact on sales of reindeer meat.

The larger point, Oughton said, “is that public acceptance with regard to these issues comes down to more than becquerels and sieverts. It is a very complex issue.” And nowhere more complex than in Japan. Alexis Dudden of the University of Connecticut offered a historian’s perspective when she suggested that “both at the local level and the national level, some discussion needs to take into consideration Japan’s particular history with radiation.”

What about the fish?

According to the KQED interview with Matt Charette (minute 14:15 – 15), 20% of fish obtained near Fukushima prefecture come in at above the Japanese standards.

The Japanese allow 100 Bq/kg for fish, less than 1/10 as much radioactivity as Americans (and everyone else) allow. Within 20 km (12 miles) of the plant, 40% of the bottom-dwelling fish off Fukushima don’t meet the Japanese standards. While most of these would meet international standards, two greenling caught in August 2012 came in at 25,000 Bq/kg (subscription needed).

All fish from this region, particularly the bottom-dwelling fish, are tested, and those that flunk are not sold in Japan or exported.

According to Fukushima-derived radionuclides in the ocean and biota off Japan, in Proceedings of the National Academy of Sciences, the level of cesium in fish would need to be about 300 – 12,000 Bq/kg to become as important as the radioactive polonium-210 found in various fish species. Potassium-40 is also an important source of radioactivity in ocean fish. Only a small portion of the fish tested in 2011 had added even half as much radioactivity as ocean fish carry naturally. Eating a 200 gram piece of fish (a typical restaurant portion) at >200 Bq cesium/kg is equivalent to eating an uncontaminated 200 g banana.

Fishing has begun again off the coast:

Out of 100 fish and seafood products tested, 95 were clear of radioactive substances and the remaining five contained less than one-10th of the government’s limit of 100 becquerels for food products, it added.

Cost of switching to fossil fuels

Japan is now operating full time old power plants meant to run only while the nuclear plants are down, and it is challenging to keep them on. Much of the $40 billion annual increase in the cost of fossil fuels (to $85 billion) since March 2011 is due to replacing nuclear power with fossil fuels.

Japan’s greenhouse gas emissions are up about 4% overall (10% for electricity), even with less electricity available.

* Units: One microsievert (µSv) = 1 millionth of a sievert. One millisievert (mSv) = 1 thousandth of a sievert. The standard (linear no-threshold) model predicts, approximately, for the general population, that 10 man-sieverts = 1 cancer and 20 man-sieverts = 1 death. Most major organizations assume a lower health effect for low doses (below 100 mSv or below 10 mSv) or for a low dose rate.

Sieverts include decay rate, type of decay (some types of decay do more damage), and tissue type—they are a health effect.

There are 8,760 hours/year. Multiply values for µSv/hour by 8.76 to get mSv/year. Since radioactivity is disappearing rapidly from the area around Fukushima-Daiichi, round down a bunch to get your actual exposure over the next year if you move back today.
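For readers who like to check the arithmetic, a minimal sketch of these unit conversions and the rough risk rule of thumb quoted above (the rule of thumb is only that—agencies use more detailed models):

```python
HOURS_PER_YEAR = 8_760

def uSv_per_hr_to_mSv_per_yr(rate_uSv_per_hr):
    # microsieverts/hour -> millisieverts/year (divide by 1,000 for micro -> milli)
    return rate_uSv_per_hr * HOURS_PER_YEAR / 1_000

def rough_health_effect(collective_dose_man_sv):
    # Rule of thumb from this post: 10 man-sieverts ~ 1 cancer, 20 ~ 1 death
    return {"cancers": collective_dose_man_sv / 10,
            "deaths": collective_dose_man_sv / 20}

print(uSv_per_hr_to_mSv_per_yr(8.5))   # about 74 mSv/year, matching the table above
print(rough_health_effect(13))         # e.g., a 13 man-sievert collective dose
```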

Part 1 Bottom line numbers
Part 3 The plume and fish come to North America
Part 4 The history of predictions on spent fuel rods
Part 5 The current state of F-D cleanup

Fukushima update, bottom line numbers—part 1

September 20th, 2013

Updated to correct the number of evacuation related deaths and to add the standards for evacuation.

There has been a recent upsurge of odd assertions about the nuclear accident in Fukushima, along with reasonable questions about what the heck is happening there. The short answer is: not much. The long answer will be spread over a few posts.

You can begin with my articles in Friends Journal, with appendices. Note there are two articles, one coming after the letters to the editor.

Early version bottom line numbers from those articles

Some of these numbers are updated below.

• 124 workers at the Fukushima-Daiichi plant received a dose of more than 100 millisieverts (mSv). This unit takes into account the actual decay rate, the type of decay (some decays do more damage), and the tissue type, to give a health effect. The net result is that one worker might die, with the group’s total cumulative exposure exceeding 13 man-sieverts (more or less, 10 man-sieverts = 1 cancer, 20 man-sieverts = 1 death; fewer cancers and deaths are predicted in an older male cohort). Additionally, one worker died from a heart attack while wearing the hazmat outfit in the heat.

• The actual exposure to the public was relatively small, in part because people stayed inside and in part because of evacuation.

• The official level of public safety is 20 mSv/year for the public the first year back, or else 1 mSv/year (I find myself confused; more in part 2). A few (several?) tens of thousands of people lived in areas where the first-year dose would be 20 mSv or higher. The health effect of 20 mSv is less than the health risk to each of the 36 million living in Tokyo from just the particulates in air pollution.

• Background radioactivity falls rapidly with the decay of the radioactive iodine (half life 8 days). Iodine is the most dangerous, as it targets a very small gland (the thyroid). Most of the rest of the radioactivity was from cesium, half of which decays at the rate of 30% per year, and half at the rate of 2%/year. Cesium is also removed from the environment by natural processes, such as rain, which makes the ecological half life much smaller—the areas around Chernobyl saw half of the cesium disappear from the environment every 0.7 to 1.8 years for the first 4 – 6 years, depending on location. Physical half life alone would predict 58% left at the end of 4 years, and 50% after 6 years; yet only 3 – 20% of the Chernobyl cesium remained after 4 – 6 years. (The physical-decay numbers are checked in the sketch below.)
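The physical-decay figures in that last bullet can be checked with a short sketch, assuming the cesium activity starts out split roughly half and half between Cs-134 (about a 2-year half-life, the ~30%/year component) and Cs-137 (about a 30-year half-life, the ~2%/year component):

```python
from math import exp, log

# Half-lives in years; activity assumed split 50/50 between the two isotopes
HALF_LIFE_YEARS = {"Cs-134": 2.06, "Cs-137": 30.1}

def fraction_remaining(years):
    return sum(0.5 * exp(-log(2) * years / t_half)
               for t_half in HALF_LIFE_YEARS.values())

print(f"After 4 years: {fraction_remaining(4):.0%}")   # ~59%, close to the 58% above
print(f"After 6 years: {fraction_remaining(6):.0%}")   # ~50%
```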

Updates

• The number of workers exposed to more than 100 mSv is up to 146, and the expected number of deaths is still not quite 1.

• The most exposed member of the public will get an exposure of < 10 mSv over their lifetime and this probably overstates the case. If the local residents had moved to Denver, their dose would increase by 8 mSv/year; if to other areas of the world, their dose could increase by as much as 50 – 250 mSv/year.

• The health effects of living in Tokyo go beyond particulates: ground-level ozone and nitrogen oxides make Tokyo even unhealthier, by a lot. For example, Tokyo is smoggier than LA, and residents of California’s Central Valley and Los Angeles metropolitan area have a 25 – 30% higher chance of dying of respiratory disease, compared to the SF area.

• While the Japanese government protected people from exposure to radioactivity, some policies opened them up to perhaps greater dangers.

According to World Nuclear Association,

As of October 2012, over 1000 disaster-related deaths that were not due to radiation-induced damage or to the earthquake or to the tsunami had been identified by the Reconstruction Agency, based on data for areas evacuated for no other reason than the nuclear accident. About 90% of deaths were for persons above 66 years of age. Of these, about 70% occurred within the first three months of the evacuations. (A similar number of deaths occurred among evacuees from tsunami- and earthquake-affected prefectures. These figures are additional to the 19,000 that died in the actual tsunami.)

The premature deaths were mainly related to the following: (1) somatic effects and spiritual fatigue brought on by having to reside in shelters; (2) Transfer trauma – the mental or physical burden of the forced move from their homes for fragile individuals; and (3) delays in obtaining needed medical support because of the enormous destruction caused by the earthquake and tsunami. However, the radiation levels in most of the evacuated areas were not greater than the natural radiation levels in high background areas elsewhere in the world where no adverse health effect is evident, so maintaining the evacuation beyond a precautionary few days was evidently the main disaster in relation to human fatalities.

The international recommendation for evacuation is 700 mSv/year, with the IAEA saying that taking 1 month to evacuate at 880 mSv/yr is OK (with people staying indoors, presumably, so actual exposure is less). Presumably this balances the dangers of evacuation with the dangers of staying. The amount of radioactivity was MUCH higher in the first month, due to the radioactive iodine, but only the small towns in the immediate vicinity of the plant itself got anywhere near this kind of dose.

As with Chernobyl, the health effects are expected to be dominated by anxiety about the radioactivity, and the behaviors that accompany this anxiety. The effect of anxiety can be seen in the statistics: the death rate from Chernobyl is much lower among those who refused to evacuate and those who evacuated and then returned, higher in those who evacuated but did not return.

I assume that there has been or will be some discussion of evacuations—both what short-term exposure is acceptable, and how rapidly to evacuate. A quick evacuation may not be needed, although restrictions not used after Chernobyl will reduce exposure dramatically (stay indoors, and don’t eat unwashed apples off the tree or drink milk from the local cows).

• The Japanese government appears to have exacerbated the worries of its people, both by sounding as if it doesn’t know what is what, and by overdoing the warnings. Just two examples:

——Bottled water was provided for Tokyo parents when radioactivity for a very short time reached 210 becquerel/liter, because this would exceed the Japanese limit for exposure if the babies drank the water for a year. The European standard is 1,000 Bq/liter over a year, with no provision for worry if the level is up 5% for 2 days.

——In Residents brave radiation fears for two golden hours in ghost town, an article on Tomura residents visiting their homes shortly after F-D, residents are shown in outfits to protect them from radioactivity.

Article caption—Hazardous homecoming: Residents dressed to protect against radioactive contamination wait at a local gymnasium in Fukushima

The highest measured dose is 1.3 microsieverts/hour. Contrast this with the 9.5 µSv/hr exposure when flying from New York City to Paris, a flight longer than 2 hours.

• There is general agreement that the Japanese government was trained at the anti-Tylenol school of disaster communication.

A lot of top nuclear people and organizations are saying so, more on that in post 4.

Part 2 The state of the evacuation, food and fish
Part 3 The plume and fish come to North America
Part 4 The history of predictions on spent fuel rods
Part 5 The current state of F-D cleanup

Another conflict resolution exercise—solutions to climate change, part 3

September 2nd, 2013

“We’re talking about this civilly!” was my favorite line of the workshop. It came during the exercise Gradients of Agreement in the workshop Friends Process: Responding to Climate Change, where we produced a minute on climate change. In this exercise, we first identified our position on wonk recommendations (from major reports from the communities that begin with peer review) on solutions to climate change, and then we explained why we were standing where we were.

After each statement (see below), we stretched ourselves out on a line from 1 to 8. Then those at the extremes explained their reasoning; others did as well. At any point we could move around, or stay put. Those who moved shared why, and others might share why they stayed put, and what could entice them to move. Moving around physically made the exercise different somehow—it let us feel that our commitment to a position could be temporary. And at any point we might be asked to explain our position, something we do too rarely in the safety of like-minded others.

The gradient line:
1 whole-hearted endorsement
2 agreement with a minor point of contention
3 support with reservations
4 abstain
5 more discussion needed
6 don’t like but will support
7 serious disagreement
8 veto

The closest we got to an agreement on any statement below was clustering between 1 and 4; sometimes we stretched from 1 to 8.

Place yourself on the line for each of these six statements. Can you explain why you are there? All of these are mainstream recommendations, although A, C, and F come from the most aggressive push I’ve seen for alternatives to fossil fuels from a mainstream source, the International Energy Agency’s Redrawing the Energy Map.

A. I support International Energy Agency’s recommendation to add 4,000 terawatt-hours of wind yearly (including replacement of essentially all current windmills), and 1,200 TWh of solar yearly in panels, including replacements, by 2035. The wind, if on land, would be spread over 200,000 sq miles (about 1.5 Californias), with more than 7,000 sq miles of land actually covered by roads and windmills. Much of the solar would be solar parks spread over 10,000 sq miles (an area the size of Maryland). [Note: the comparison is to US states, but IEA's recommendations, here and below, are for the world.]

B. I support adding a cost to greenhouse gas emissions. The current US estimate of the social cost of GHG would add $36/ton, about 3.6 cents/kWh for coal, half that for natural gas, and 32 cents/gallon for gasoline, and a lot to the cost of an airplane flight (more than just the tax on fuel); a back-of-envelope check of these numbers follows statement F. This would likely be 3 – 4 times larger, or even more, by 2050.

C. I support International Energy Agency’s recommendation to add 3,500 terawatt-hours in new nuclear yearly, including replacements, by 2035. This is about 400 reactors of the type being built today in South Carolina and Georgia, although the actual mix will probably include some that are smaller. Land use is less than 1% that of wind.

D. I support hydraulic fracturing, fracking, techniques used to replace coal with natural gas in the US, China, Germany and elsewhere.

E. I support adding a cost to greenhouse gases and letting the market decide which solutions are the best able to reduce greenhouse gas emissions, rather than subsidizing wind and solar.

F. I support International Energy Agency’s recommendation to increase the world’s supply of hydro by almost one half by 2035.
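The back-of-envelope check promised in statement B, a minimal sketch assuming round emission factors (roughly 1 kg CO2/kWh for coal, half that for natural gas, and about 8.9 kg CO2 per gallon of gasoline—my assumptions, not figures from the IEA report):

```python
# Rough pass-through of a $36/ton CO2 price, using assumed emission factors.
cost_per_ton_co2 = 36.0                 # dollars per metric ton of CO2
cost_per_kg_co2 = cost_per_ton_co2 / 1000

coal_kg_co2_per_kwh = 1.0               # assumed emission factor for coal power
gas_kg_co2_per_kwh = 0.5                # roughly half of coal
gasoline_kg_co2_per_gallon = 8.9        # assumed, tailpipe only

print(f"Coal:        {coal_kg_co2_per_kwh * cost_per_kg_co2 * 100:.1f} cents/kWh")
print(f"Natural gas: {gas_kg_co2_per_kwh * cost_per_kg_co2 * 100:.1f} cents/kWh")
print(f"Gasoline:    {gasoline_kg_co2_per_gallon * cost_per_kg_co2 * 100:.0f} cents/gallon")
```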

I was always in position 1 for each of these, although I assumed the fetal position for recommendation F—if there are more major solutions than are needed, hydro will be the first to be cut. All of these are mainstream wonk recommendations. For more, see Intergovernmental Panel on Climate Change Working Group 3, Mitigation (a new report is due out mid-2014). For a relatively easy to follow wonk blog, try Energy Economics Exchange.

Leave a comment with where you are on the line for any one of these statements, A – F, especially one you have changed your position on, and why. What might change your position again?

*******

Part 1 Quaker workshop minute on climate change
Part 2 Conflict resolution exercise—solutions to climate change in which we look at which sources we rely on, and why

Conflict resolution exercise—solutions to climate change, part 2

August 31st, 2013

Yesterday I discussed a statement produced in our July workshop, Friends Process: Responding to Climate Change. The emphasis was on conflict resolution solutions and Quaker processes that help—How do we begin talking about controversial social issues? How do we begin listening?

We focused on solutions to climate change—if you’re human, you probably object to at least one, and likely several, solutions the wonk reports (major reports out of the communities that begin with peer review) say are needed. As we said in our minute, it is important to spend more time in discernment of our values, and in finding ways to listen to scientists.

In one exercise, from Greg Craven’s What’s the Worst That Can Happen?, we explained which sources we trust and why. Consider who provides the information you trust: is it environmental groups? friends? science organizations? Heartland? The list of possibilities is long. Put them in order from most trustworthy to least. Now choose a couple of sources you really trust, say person A and organization B—explain what characteristics sources you find trustworthy have. How would I get to person A and organization B from your explanation of trustworthiness alone?

Is your description of sources the same for both the science of climate change and the solutions? If not, why?

You may find this very hard. My answer for which people and groups I trust is below, just to give an example.*

Leave comments: create a list of sources you trust on solutions to climate change, and explain your reasoning on the list, or one source, to the rest of us. Do you have different standards for the science of climate change, and solutions? (And while you’re there—have you ever learned from person A or organization B that you are wrong on an important issue?)

* Which sources do I trust?

I trust major reports that come out of the communities that begin with peer review. I don’t trust peer review by itself, as there are a lot of mistakes with the first article published. (Even with peer review, there are a lot of mistakes in good journals; some less good journals only seem to review that your check is good.)

After an idea has been introduced, the idea will be considered, seasoned, and challenged by others. Often the same experiment is done again by others, or the idea is tested with a very different approach. Government agencies, such as NOAA and NASA, often act as a higher layer of review. Even more review is done at the level of National Academy of Sciences and Intergovernmental Panel on Climate Change. If scientists disagree with conclusions at that level, they will often say so in Science. I trust these ideas, not as final Truth, but as the best we know at present—it is a fair bet that the conclusions will hold up better over time than ideas which haven’t undergone this kind of challenge. I trust this process because I see so much real challenge to new ideas; ideas have to prove themselves. I trust this process because ideas which are found to be schlock disappear from the scientific discussion.

In addition to high level reports, I trust a few scientists highly respected both within and outside their fields to accurately characterize scientific understanding, to include the nuances, as well as what is not known. They might be heads of national labs, or elected to prominent positions, such as president of American Association for the Advancement of Science. Being chosen often to co-lead prominent committees for groups such as National Academy of Sciences and President’s Committee of Advisers on Science and Technology is yet another sign of respect. Or sometimes I just hear that particular scientists are well-esteemed by their colleagues.

I trust lay people who get their information pretty much from the above sources.

I have a very different category which I call “listening on climate change”. If a non-scientist tells me they care about climate change, I want to know what solution they accept for climate change that they did not accept when they first began to worry. If they haven’t added any new solutions outside those favored by their tribe (for some this might be a steep cost on greenhouse gases, for others it’s nuclear power and fracking), my heart doesn’t hear them talking about climate change but about solutions they favor.

Do these wonk sources ever show me where I have been wrong? Yes, at much more than the nuance level: on the importance of climate change, for one, and on the safety and importance of nuclear power, genetically modified foods, and carbon capture and storage. And more. If I am never wrong on important issues, what are the odds that I am listening?

Part 1 Quaker workshop minute on climate change
Part 3 Another conflict resolution exercise—solutions to climate change, in which individuals take positions on different solutions, and explain to the others.

Quaker workshop minute on climate change—Part 1

August 30th, 2013

Normally workshops at the Friends General Conference Gathering (FGC) have a lot of time for worship (sitting in silence). When Gretchen suggested that we use that time for Business Meeting, my first thought was, “Eeek! I need silence; I’m not so crazy about substituting business for centering.” We were planning our workshop, Friends Process: Responding to Climate Change, and FGC is hectic enough without adding business. But I said yes, confident that if it didn’t work, Gretchen would figure that out and return to Plan A (quiet!).

To my surprise, Business Meeting was spiritually centering. We began it with the same question each day, “Where are we now in the workshop?” These minutes of exercise grew into a formal minute of our time together, a statement about the process we went through and where we ended up. For the full minute, go here.

Lots of religious people produce statements about climate change. How is ours different?

• We don’t mention God. It’s not needed; none doubted that God is telling us, “Do something!”

• We do mention Intergovernmental Panel on Climate Change. Twice. Because we trust IPCC to explain the science—what is causing climate change, and what are the impacts.

• A big part of what we looked at was the challenge we felt in choosing which something to do. While all of us trust wonk reports on climate change, the majority is uncomfortable with wonk recommendations for solutions—some see those looking at climate change as independent scientists, while seeing those looking at solutions as tools of industry. (Wonk here refers to those producing major reports out of the communities that begin with peer review.) Many of us were suspicious that government regulations and oversight wouldn’t work as well as wonk communities hope.

• We admit that we often avoid facing the climate problem “squarely” because “the truth is overwhelming.” And we often ignore “costs—economic, environmental, and human—…in solutions we personally favor.” (As a result, we become part of the problem.)

• We emphasize the need for solutions at high levels—international, national, and regional. This aligns well with wonk thinking as to where solutions are found, that individual behavior change will not be important to the solution. (Note: I personally feel there are many good reasons to change my own behavior. Eg, if I feel climate change is important, then it makes sense to live my life as if it were important. I learn much about obstacles to behavior change. People often change their behavior first, and having changed, are willing to acknowledge the problem that goes with that change.)

• We say that it is important to speak Truth to ourselves, “leaning into conflicts” rather than avoiding them. It is important for Quakers, and Quaker organizations, to move more in alignment with what scientists say, and with our values. Not all Quaker organizations are there now, and so we list all those involved with climate lobbying, so that we can query to what extent each makes an effort to align their recommendations with wonk information and Quaker values. Yes, we know that individually we are not there now either. In our time together, we saw ourselves shifting, and knew we would shift more as the discussion continues.

So far as I know, this is the first minute approved by Friends that stresses the individual, corporate, and organizational importance of addressing the incomplete overlap between the solutions we favor and those advocated by wonks, and between the solutions we favor and Quaker values.

• We find the Business Meeting methods used in our workshop not only effective tools for addressing the conflict within, but “personally nourishing”.

Leave a comment
How well do your, or your group’s, solutions to climate change overlap with wonk solutions? What values do your favorite solutions reflect?

Many of the details of our experience were not included in the minute, eg, what processes helped us? In my next two posts, I will give examples. Every group is different, but they worked for our group at this time.

*******

Clarification
Of course the minute was written by committee, but I was responsible for the part on 2°C/4°C, and one reader said, “??????” So to clarify:

• The increase is compared to when? In the minute, all temperatures are compared to pre-industrial. IPCC’s 2007 report compared temperatures to the average from 1980 – 1999; add about 0.6°C to their numbers for temperature increases compared to pre-industrial. Media accounts are providing results from the draft IPCC report coming out in September, but what do their estimates compare to? They don’t tell us.

• What are major organizations predicting?

2°C
It is technically possible to keep temperature increase below 2°C by the end of the century, according to International Energy Agency (IEA). However, in its 2008 Energy Technology Perspectives, IEA considered this hard—we would need “unprecedented levels of cooperation”, and “the global energy economy will need to be transformed”. Now IEA says the task is “technically feasible, though extremely challenging”, a phrase meaning “much harder than in 2008”. No wonder we find science publications hard to read.

World Bank says warming could reach 2°C within “20 to 30 years”. It’s been a long time since I have heard anyone in science besides IEA talk about keeping temperature increase this century below 2°C.

For those wanting to read more about why scientists picked 2°C as the temperature increase to avoid, see Assessing dangerous climate change through an update of the Intergovernmental Panel on Climate Change (IPCC) ‘‘reasons for concern’’.

4°C
A 4°C increase reached this century, or even next century, is necessarily a way station on the way to higher temperatures; if we are adding heat that fast, we are not at the top yet. Mainstream predictions of 4°C begin as early as 2060.

Many point to 4°C as the point at which human adaptation may not be possible. World Bank says in Turn Down the Heat: Why a 4°C Warmer World Must be Avoided, “With pressures increasing as warming progresses toward 4°C and combining with nonclimate–related social, economic, and population stresses, the risk of crossing critical social system thresholds will grow. At such thresholds existing institutions that would have supported adaptation actions would likely become much less effective or even collapse.”

Predictions for the end of the century
I have heard and read very few predictions for the end of the century below 3.5-4°C, although Robert Watson, who used to lead IPCC, and later Millennium Ecosystem Assessment, does talk about 3 – 5°C by the end of this century, and even says we have a decent chance of staying below 3°C. In the same talk, he cited a prediction of 10% species diversity loss for every °C increase over pre-industrial. [Note: some of this is commitment to extinction—species loss is unlikely to be 10% at the time temperature increase reaches 1°C.]

Most other estimates are higher. World Bank says we are on track to reach 4°C “even if countries fulfill current emissions-reduction pledges.”

International Energy Agency says in Redrawing the Energy Climate Map, which it produced to show us short-term policies which are needed to keep the 2°C option open, “Policies that have been implemented, or are now being pursued, suggest that the long-term average temperature increase is more likely to be between 3.6 °C and 5.3 °C (compared with pre-industrial levels), with most of the increase occurring this century.”

IPCC’s 2007 report gave a best estimate of 4.6°C over pre-industrial by 2090-2099, with a range of 3° – 7°, for the fossil intensive scenario. Our current emissions trajectory is near the top of IPCC projections.
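To make the baseline arithmetic from the clarification above concrete, here is a small sketch; the only inputs are the roughly 0.6°C baseline shift and the 4.6°C figure already quoted in this post.

```python
# Converting between baselines, using only numbers quoted in this post:
# IPCC 2007 reported temperature increases relative to the 1980-1999 average;
# adding roughly 0.6 °C expresses them relative to pre-industrial.
baseline_shift = 0.6               # °C, pre-industrial to 1980-1999 average
best_estimate_preindustrial = 4.6  # °C, fossil intensive scenario, 2090-2099
best_estimate_1980_1999 = best_estimate_preindustrial - baseline_shift

print(f"{best_estimate_1980_1999:.1f} °C above 1980-1999 "
      f"≈ {best_estimate_preindustrial:.1f} °C above pre-industrial")
```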

*******

Part 2 Conflict resolution exercise—solutions to climate change in which we look at which sources we rely on, and why.
Part 3 Another conflict resolution exercise—solutions to climate change, in which individuals take positions on different solutions, and explain to the others.

Earth is getting warmer

April 1st, 2013

We knew that, but how fast? And why haven’t we had a record-hot year since 2010?

How much has temperature changed?

NASA temperature

NASA has a number of graphs for the US and the world. This one shows more of a temperature increase in the northern hemisphere, where there is more land.

This trend is somewhat easier to see with separate graphs for El Niño, La Niña, and ENSO-neutral years.
Temperature graphs

Credit: Skeptical Science

The temperature increase at the surface has slowed down some. Earth has been warming at less than 0.2°C/decade:

The three major surface temperature data sets (NCDC, GISS, and HadCRU) all show global temperatures have warmed by 0.16 – 0.17°C (0.28 – 0.30°F) per decade since satellite measurements began in 1979.

Surface warming is currently below that rate; over the last decade, temperature increase at the surface has only been 0.081 ± 0.13°C.

The Intergovernmental Panel on Climate Change prediction is that Earth’s surface will continue to warm by 0.2°C/decade, to one significant figure, for the next two decades, but the range is huge, from slightly negative to more than 0.3°C/decade, depending on where heat is stored.
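For readers curious how a per-decade rate like 0.16 – 0.17°C is arrived at, here is a minimal sketch of the usual approach: fit a straight line to annual temperature anomalies and read off the slope. The anomaly values below are invented for illustration; they are not NCDC, GISS, or HadCRU data.

```python
# Minimal sketch: least-squares trend of annual surface-temperature anomalies.
# The anomaly values are made up for illustration, not real data.
import numpy as np

years = np.arange(2003, 2013)                         # one decade of annual values
anomalies = np.array([0.62, 0.54, 0.68, 0.64, 0.66,
                      0.54, 0.64, 0.72, 0.61, 0.64])  # °C relative to some baseline

slope_per_year, intercept = np.polyfit(years, anomalies, 1)  # °C per year
print(f"Trend: {slope_per_year * 10:+.2f} °C/decade")
```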

OK, temperature increase is between a negative amount and 0.3°C this last decade, but why is it at the lower end?

Changes in natural forcings

The sun goes through an 11-year cycle, but there are variations from cycle to cycle. Compared with other years when solar irradiance was at a maximum, Earth is receiving about 0.1 watt/square meter less sunlight.

solar forcing
credit

This may not sound like much of a decrease from the 240 W/m2 Earth normally absorbs, except that the total change in forcing since 1750 has only been 1.6 W/m2.
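Putting those two comparisons side by side, a back-of-envelope check with only the numbers quoted above:

```python
# Back-of-envelope comparison, using only the numbers quoted in this post.
solar_dip      = 0.1    # W/m2 less sunlight than in a typical solar maximum
absorbed       = 240.0  # W/m2 normally absorbed by Earth
forcing_change = 1.6    # W/m2 change in total forcing since 1750

print(f"{solar_dip / absorbed:.2%} of what Earth absorbs")           # about 0.04%
print(f"{solar_dip / forcing_change:.0%} of the change since 1750")  # about 6%
```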

Then there are volcanoes. Jeff Masters points to

a study published in March 2013 in Geophysical Research Letters found that dust in the stratosphere has increased by 4 – 10% since 2000 due to volcanic eruptions, keeping the level of global warming up to 25% lower than might be expected.

This result was surprising. Previously, it was thought that Pinatubo-sized eruptions could release enough sulfur dioxide to affect climate in the short term, but that small to moderate volcanoes could not. While the increase in Asian pollution is also cooling Earth, the effect of small to moderate volcanoes has been more important.

Ryan Neely, an atmospheric scientist at National Center for Atmospheric Research (NCAR)

cautions that, while the new study shows the importance of volcanoes on a decadal level, there is a need to learn more about their effects on year-to-year climate variability as well. “Though we show that volcanoes had the most impact in this instance, this has not and may not always be true,” he says.

So are net heat flow and the rate of warming decreasing?

The energy budget of Earth refers to the flow of heat in, from the sun, and out. Some incoming sunlight is reflected, and heat radiates away from any warm body.

According to Skeptical Science,

This energy imbalance was very small 40 years ago but has steadily increased to around 0.9 W/m2 over the 2000 to 2005 period, as observed by satellites. Preliminary satellite data indicates the energy imbalance has continued to increase from 2006 to 2008. The net result is that the planet is continuously accumulating heat.

Note that because the change in forcings since 1750 is 1.6 W/m2, being out of balance by 0.9 W/m2 indicates we will continue to warm for some time, even if atmospheric levels of greenhouse gases remain constant.
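A rough way to read that note, using only the two numbers above: the observed imbalance is a bit more than half of the total change in forcing since 1750, so a large share of the warming implied by that forcing has not yet appeared at the surface.

```python
# Rough illustration of why a 0.9 W/m2 imbalance implies more warming ahead.
forcing_change = 1.6   # W/m2, change in forcings since 1750
imbalance      = 0.9   # W/m2, observed energy imbalance, 2000-2005

print(f"{imbalance / forcing_change:.0%} of the forcing "
      "is not yet balanced by extra outgoing radiation")  # about 56%
```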

Kevin E. Trenberth, head of the Climate Analysis Section at NCAR, and a lead author for the 2001 and 2007 IPCC Scientific Assessment of Climate Change (Working Group 1), wondered,

with this ever increasing heat, why doesn’t surface temperature continuously rise? The standard answer is “natural variability”. But such a general answer doesn’t explain the actual physical processes involved. If the planet is accumulating heat, the energy must go somewhere. Is it going into melting ice? Is it being sequestered deep in the ocean? Did the 2008 La Niña rearrange the configuration of ocean heat? Is it all of the above?

Now we know that much of the heat is being stored in the ocean at depths below 700 meters (2300 ft):

The preponderance of La Niña events in recent years has caused a large amount of heat from global warming to be transferred to the deep oceans, according to a journal article published earlier this week by Balmaseda et al., “Distinctive climate signals in reanalysis of global ocean heat content”.

Is that good or bad?

The next big El Niño event will be able to liberate some of this stored heat back to the surface, but much of the new deep ocean heat will stay down there for hundreds of years. As far as civilization is concerned, that is a good thing, though the extra heat energy does make ocean waters expand, raising sea levels.

Can we stop it?

In a perspective piece in the March 28, 2013 Science (subscription required), Irreversible Does Not Mean Unavoidable, H. Damon Matthews and Susan Solomon (a lead author in Working Group 1 of the 2007 IPCC report) say there is confusion:

irreversibility of past changes does not mean that further warming is unavoidable.

The climate responds to increases in atmospheric CO2 concentrations by warming, but this warming is slowed by the long time scale of heat storage in the ocean, which represents the physical climate inertia. There would indeed be unrealized warming associated with current CO2 concentrations, but only if they were held fixed at current levels. If emissions decrease enough, the CO2 levels in the atmosphere can also decrease. This potential for atmospheric CO2 to decrease over time results from inertia in the carbon cycle associated with the slow uptake of anthropogenic CO2 by the ocean. This carbon cycle inertia affects temperature in the opposite direction as the physical climate inertia, and is of approximately the same magnitude.

Because of these equal and opposing effects of physical climate inertia and carbon cycle inertia, there is almost no delayed warming from past CO2 emissions. If emissions were to cease abruptly, global average temperatures would remain roughly constant for many centuries, but they would not increase very much, if at all. Similarly, if emissions were to decrease, temperatures would increase less than they otherwise would have…”

So if we cease to add greenhouse gases to the atmosphere, atmospheric greenhouse gas concentrations would begin to decline, due to uptake by the ocean and other sinks. Earth would continue to warm because heat flow in is still larger than heat flow out, but some of this heat would be taken up by the oceans. Over time, Earth’s surface would heat very slowly, if at all. It would not cool for a long while. Projections show temperatures continuing to go up because we will continue to emit GHG for some time.

More explanation here.

Summary

Satellite measurements find Earth is accumulating heat faster than in the 1990s. This is occurring even with a cooler sun and a temporary increase in sulfate particles in the atmosphere, which reflect more sunlight. Earth’s surface is warming more slowly because, with La Niñas, heat is stored in the oceans at depths below 700 meters. (There is still some heat flow not yet accounted for.) When heat is stored in the deep oceans, sea level increases more rapidly, while our climate changes more slowly. Some of this heat will be returned to the atmosphere with the next El Niño.

Reducing, even zeroing out, GHG emissions is an excellent idea.

Rational thinking

February 22nd, 2013

“If you just thought for yourself, you’d agree with me and all my friends.” How often have you and I and the kitchen sink heard that?

Dan Kahan, one of the cultural cognition people, discusses the downsides of original thinking:

People need to (and do) accept as known by science much much much more than they could possibly understand through personal observation and study. They do this by integrating themselves into social networks—groups of people linked by cultural affinity—that reliably orient their members toward collective knowledge of consequence to their personal and collective well-being…

Polarization occurs only when risks or other facts that admit of scientific inquiry become entangled in antagonistic cultural meanings. In that situation, positions on these issues will come to be understood as markers of loyalty to opposing groups. The psychic pressure to protect their standing in groups that confer immense material and emotional benefits on them will then motivate individuals to persist in beliefs that signify their group commitments.

They’ll do that in part by dismissing as noncredible or otherwise rationalizing away evidence that threatens to drive a wedge between them and their peers. Indeed, the most scientifically literate and analytically adept members of these groups will do this with the greatest consistency and success.

Once factual issues come to bear antagonistic cultural meanings, it is perfectly rational for an individual to use his or her intelligence this way: being “wrong” on the science of a societal risk like climate change or nuclear power won’t affect the level of risk that person (or anyone else that person cares about): nothing that person does as consumer, voter, public-discussion participant, etc., will be consequential enough to matter. Being on the wrong side of the issue within his or her cultural group, in contrast, could spell disaster for that person in everyday life.

Some controversial social issues don’t carry this risk, and thinking for one’s self is OK:

The number of issues that have that character, though, is minuscule in comparison to the number that don’t. What side one is on on pasteurized milk, fluoridated water, high-power transmission lines, “mad cow disease,” use of microwave ovens, exposure to Freon gas from refrigerators, treatment of bacterial diseases with antibiotics, the inoculation of children against Hepatitis B, etc., etc., isn’t viewed as a badge of group loyalty and commitment for the affinity groups most people belong to. Hence, there’s no meaningful amount of cultural polarization on these issues–at least in the US (meaning pathologies are local; in Europe there might be cultural dispute on some of these issues & not on some of the ones that divide people here).

Yet some of us do hold views on iconic issues that differ from what others with our cultural affinity believe. What makes the difference? What motivates us to adopt different beliefs? What inoculates us against the reaction of the group? Or makes the importance of thinking for one’s self greater than group affinity?

Mark Lynas repudiates his position on GM crops

January 8th, 2013

In a talk (link includes transcript) to the Oxford Farming Conference, Lynas apologizes for and repudiates his high-profile previous position on genetically modified food.

He explained the underlying reason for his shift: he had begun adding science to his climate change work, denigrating those who relied on poor-quality reports. At some point, he was challenged to begin reading high-level reports on GM.

He doesn’t repudiate organic farming (his parents, who are organic farmers, approved his speech!), but he does point out that organic techniques lead to greater land use, and thus lower biodiversity.

His talk is worth listening to. What do you think?

The first big action on climate change

November 7th, 2012

A President and Congress have been elected. What to do first?

If your Representative and/or Senators are red, let them know that you support a big cost on greenhouse gas emissions to decrease use of foreign oil, decrease the deficit, and fund infrastructure (science research on energy, road and sewer repair).

If your Representative and/or Senators are blue, let them know that you support a big cost on greenhouse gas emissions that will be step 1 in addressing climate change, decrease use of foreign oil, and will also decrease the deficit and fund infrastructure (science research on energy, road and sewer repair).

Those who want to fight the tax vs cap and trade fight, are you more interested in fighting climate change, or something else?

Optimism, obedience, and other motivations to respond to climate change

August 13th, 2012

A conversation with a friend made it clear that our motivations to act on climate change differ. She acts from optimism: she changes her behavior, and talks to others about changing theirs, because she is optimistic that this will reduce greenhouse gas emissions a lot. She is looking for legislation that she and others can champion, hoping that within 5 years, the US will enact good legislation.

My joy comes not from optimism, but obedience. I believe climate change is important, so I want to live that understanding in my personal choices. I understand policy change to be more important than individual behavioral choices, so I study policy and advocate for better policies. I hear that we cannot agree even on the need for taxes for adequate road maintenance in today’s political environment, so I did the 40 days in the wilderness bit, reading social scientists for 3 years, and now focus on why we don’t listen, why the facts don’t seem to matter, eg, here. I am optimistic that there will be a tad less climate change if I am obedient, but I suppose that I would do the work even if there were little chance of this being true.

There are other motivations. Competition motivated dramatic energy reductions in some Kansas towns.

So does a desire to do what others are doing: when asked to find ways to reduce energy use because it saves money, saves the environment, is a good thing to do, or your neighbors are doing it, only the last gets good results. Six percent changed their behavior after a sign was posted in a gym asking people to turn off the shower while soaping up; this rises to half if there is an accomplice who turns off his, and 2/3 if there are two. Etc.

Actions to allay anxiety are often ineffective. Columbia’s Center for Research on Environmental Decisions describes the Single Action Bias:

In response to uncertain and risky situations, humans have a tendency to focus and simplify their decision making. Individuals responding to a threat are likely to rely on one action, even when it provides only incremental protection or risk reduction and may not be the most effective option. People often take no further action, presumably because the first one succeeded in reducing their feeling of worry or vulnerability. This phenomenon is called the single action bias.

What motivates you? and others?

Using insights from social science in presentations on climate change

July 23rd, 2012

I learned some years ago that climate change is not a popular subject for presentations. Groups with so-called climate skeptics find that the doubters and deniers, often a small minority, take over the discussion with arguments that shift over time, and many groups just don’t want to deal with them, or haven’t figured out how to do so. Groups without dissenters often feel that they already know climate change is important, although very few in the audience, no matter how much they accept climate science, have internalized how fast and profound the changes might be. I’ve met people my age and younger who expect not to see harsh changes in their lifetime and as a result lose any sense of urgency. Others may have an alarmist or fatalistic reaction that makes them want to give up and go party. These groups tend to prefer reducing the focus on harsh realities in favor of solutions, preferably those they already believe in or are attracted by.

Presentations focusing on solutions are hard because so often we are mainly looking for something to allay our anxiety. Unfortunately, most solutions are problematic in one way or another, which people aren’t all that glad to hear; all solutions are partial. Nuclear power is a topic many prefer, because it gives us a chance to take sides, saying YES! or NO THANKS!

For a number of years, due to these group preferences, my presentations on climate change have ostensibly been about nuclear power; the majority of the slides have been on nuclear, certainly. For climate change, I usually include only about 2 slides explaining why scientists and national security types are worried, plus 3 slides on changes we might see in the next half century, both changes we cannot prevent and future harm we can reduce. I then introduce nuclear power as a necessary and relatively safe partial solution to climate change, according to energy scientists and policy experts. The core of the presentation focuses on answering the concerns of those who oppose nuclear, and listing the advantages of nuclear, from low greenhouse gas emissions to low pollution to reliability.

In two presentations in Philadelphia in June, I added another component: what social scientists say about the reasons why many people reject scientific consensus, whether it’s in climate science or nuclear energy. As usual, the presentations were billed as being mostly about nuclear energy. Both groups, one large and one small, accepted climate change for the most part, with the large group divided on nuclear power, while the small group was mostly anti-nuclear.

In the past, my presentations on nuclear energy in a warming world were generally appreciated by people open to scientific information, and a few who became open. But most, on all sides, wondered, why do I need this information, and what do I do with it? This is in part because the science in isolation is insufficient to inspire action, whether on climate change or particular solutions; it doesn’t tell me what my role is.

Interestingly, the addition of the social science perspective helped in both groups. I worried that people would feel insulted about generalizations that they, like everyone, see what they want to see, and that what we want to see is largely determined by what our group believes. Instead, most felt that it helped make sense of the confusion in the public discussion of controversial social issues.

Social scientists say (a few examples):
• We react from the gut, often in less than 1 second, on topics for which we have no background. Those who read more become even more polarized, as almost all of us find information that confirms our gut reaction. One person in the polarized group said, questioning information I had presented, “I’ve read that Fukushima was worse than Chernobyl.” It is easy to find sources that agree with our own preconceptions, and to believe, as one person wrote years ago in attacking the sources I rely on, “Any source that disagrees with me lacks integrity.”

• According to Jonathan Haidt and others, a primary evolutionary advantage of reasoning is to support opinions that show that we are good and trustworthy members of the group. A less common use of reasoning is to explore open-mindedly issues which require us to move into a state of tension, where we might be wrong, where there is nuance. Most avoid exploratory reasoning, especially where our group takes a stand, where exploration could challenge group expertise.

• It’s easier to attack people we don’t know than people like ourselves who use energy and products in the home. Who wants to alienate our friends?

• We all make a number of common critical thinking errors, which we can learn to do less often. Here are a few:
—failing to make direct comparisons: looking at nuclear waste rather than comparing the waste stream of various energy sources.
—question substitution: “How long does nuclear waste last,” rather than “Does anyone die?”
—the halo effect: if I like/dislike something or someone, I like/dislike all aspects. If I want renewables, I insist they are safe, sufficient, and cheap (or will be by a week from Tuesday).

• The media quote those who disagree with the best understanding of scientists on climate change and nuclear power, no matter how odd their opinions or how few agree with them. The media feel that they are covering the political controversy, but readers assume they are covering the scientific controversy, giving credence to both sides. And then there are those who get their information from unapologetically biased sources, which consciously or unconsciously make claims that sound scientific, but are no more so than Creationism.

What We Can Do
While we all want to do something about climate change, I’m not sure that we can move as fast as we would like. The one thing in our immediate control is to continue reducing our own greenhouse gas footprint. This helps reduce our cognitive dissonance (if I believe the climate is important, then I want to live as if it were important) and gives us better understanding of policies that encourage us to change our behavior.

Harder but more urgent is to begin working with society to encourage implementing good policies. Before we can accomplish much, however, two steps seem critical: move our planet’s accelerating climate change and the need for a steep cost on greenhouse gas emissions onto the list of what we all pay attention to. And secondly, tone down the rhetoric: instead of polarizing the discussion by attacking those who disagree with us, start questioning and testing our own assumptions and those of like-minded people in our group. Working with like-minded people, to help bring the discussion of controversial social issues to a better place, can be difficult; it is also where we are most likely to be successful.

Both steps require us to consider which sources are trustworthy, and to study those that point to possible errors in our thinking. Learning that we might be wrong feels awful, but it’s in a good cause, increasing the chance we will find actual solutions to problems such as climate change.