Archive for the ‘Uncategorized’ Category

Fee and Dividend – only a tax if you want it to be

December 8, 2011

At its core, a carbon tax is a fee levied on fossil fuels at either the point of extraction (the mine or wellhead) or the port of entry into a country.  This fee then propagates to all carbon-intensive energy services and products, leaving consumers to decide how much of the tax to bear through their purchasing choices.  The advantage of a carbon tax is that its burden scales with use: those who rely more on carbon-intensive products and services pay more of the tax than those who don’t.  While this has the potential to preferentially harm consumers who are regionally dependent on carbon-intensive fuels, the opportunities to mitigate this complication are more lasting than under alternative policy mechanisms for reducing carbon emissions, such as cap and trade.

The primary advantage of a carbon tax is the simplicity with which it reduces emissions through price.  A fee per ton of carbon is levied and set to increase annually.  Consumers can easily grasp that using carbon will become progressively more expensive, and they have time to adapt to the projected schedule.  Complications arise with the allocation of the funds generated by the tax.  Several people (Parry 2007, Al Gore, Congressman Larson) suggest that the most expedient way to distribute the funds is through a payroll tax deduction (1, 2).  British Columbia enacted a carbon tax with an offsetting payroll tax reduction in 2008, and has since experienced marked economic benefits (3).  Most economists agree that a uniform, rising carbon fee is the most cost-effective way to reduce dependence on fossil fuels and decrease CO2 emissions.  Comparing against cap and trade, economist Charles Komanoff has shown that in a regime where a carbon tax increases by $12.50 per year (starting from 2009), emissions would be reduced 28% by 2020 (4).  By contrast, the Waxman-Markey cap-and-trade bill of 2009 projected 17% emissions reductions in the same time frame (5).
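To make the escalation concrete, here is a minimal sketch of a linearly rising fee in Python (the starting fee is an illustrative assumption of mine; Komanoff's actual model (4) is a full spreadsheet with many more inputs):

```python
# Minimal sketch of a linearly escalating carbon fee. The starting fee is
# an illustrative assumption, not a parameter from Komanoff's model.
START_YEAR = 2009
ANNUAL_INCREMENT = 12.50      # $/ton added each year (per the regime above)
START_FEE = 12.50             # assumed first-year fee, $/ton

def fee(year):
    """Carbon fee in $/ton for a given year under linear escalation."""
    return START_FEE + ANNUAL_INCREMENT * (year - START_YEAR)

for year in (2009, 2015, 2020):
    print(year, f"${fee(year):.2f}/ton")   # 2020 -> $150/ton
# The schedule is fully predictable, which is what lets consumers and
# businesses plan their adaptation years in advance.
```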

The disadvantage of allocating the funds from a carbon tax via a payroll tax reduction (or similar measure) is that it shifts the burden of the tax onto those who do not have a job (but still participate in the economy) and those who work but don’t earn enough from any one job to pay income taxes (the current unemployment rate is ~8.5% (6)).  Even setting aside regional biases (such as eastern states’ dependence on coal versus the availability of alternative energy in western states), a payroll tax reduction preferentially burdens the poorest citizens, because they never receive dividends from the funds the carbon tax generates.

Fee and dividend, while strikingly simple in design, aims to close this loophole by distributing the funds uniformly to all citizens.  This encourages both financial and ecological equity: with an equal distribution of the funds generated, the power is in each citizen’s hands to decide whether to consume carbon-intensive products and services.
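The dividend arithmetic itself is one line (a toy sketch; the revenue and population figures below are placeholders, not policy projections):

```python
# Fee-and-dividend arithmetic: all fee revenue is returned uniformly.
# Revenue and population figures are placeholders, not projections.
def monthly_dividend(annual_revenue, population):
    """Equal per-capita share of carbon fee revenue, paid monthly."""
    return annual_revenue / population / 12

# e.g. $100 billion/yr of fee revenue across ~310 million people
print(f"${monthly_dividend(100e9, 310e6):.2f} per person per month")  # ~$26.88
# Anyone whose carbon-driven price increases total less than the dividend
# comes out ahead, regardless of employment status.
```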
(1) Parry 2007
(2) Hansen, Storms of My Grandchildren, 2009
(3) http://www.economist.com/node/18989175
(4) komanoff.net/fossil/CTC_Carbon_Tax_Model.xls
(5) http://en.wikipedia.org/wiki/American_Clean_Energy_and_Security_Act
(6) http://www.tradingeconomics.com/united-states/unemployment-rate

In Corporations We Trust

November 29, 2011

Renewable resources plus climate change

November 17, 2011

Reliable electricity delivery is paramount in transitioning to alternative energy options.  Sources such as wind benefit from their inherently renewable resource base and enormous potential for load displacement.  The issue with harnessing intermittent resources like wind is that generation is passive; until we have the technology to store energy between peak periods of generation, wind resources offer no capacity for on-demand energy.  Applied to the success of any one resource, this means that the “reliability of the bulk power system during this transition [to intermittent renewables] will be a critical measure of success as these efforts progress” (NERC).

The reliability of wind as an energy source depends on the “forecasting of variable generation output”; in the absence of on-demand generation, we at least need to be able to describe the expected generation capacity — on-forecast generation (NERC).  This is especially necessary for long-term planning of wind generating facilities, because climate change has been shown to affect — and sometimes reduce — previously expected average wind speeds.

The most immediate impact of climate change is the increase in global average temperature.  More importantly, ambient temperatures at the poles will rise faster than those at lower latitudes.  Simplistically, wind is generated where atmospheric circulation cells (such as Hadley cells) with a large temperature difference meet, so a smaller temperature difference between cells translates to slower wind speeds.  Wind energy also depends on more localized wind power densities.  These changes in the geographical distribution and variability of wind can reduce the capacity for energy generation (Pryor and Barthelmie, 2010).  In China, for example, Guo et al (2010) studied the “average rate of decrease in annual mean wind speed”, finding that the “decrease in strong winds [on the order of 0.018 m/s/yr] also may lower the potential for wind energy harvest.”  These trends cannot be extrapolated worldwide, however; changes in wind capacity can also be driven by redistributed vegetation caused by climate change (Nobre et al, 2007).
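Seemingly small trends like Guo et al's matter because the power available to a turbine scales with the cube of wind speed.  A back-of-the-envelope sketch (the baseline site speed of 7 m/s is my assumption for illustration):

```python
# Turbine power scales as v**3, so modest wind-speed declines compound
# into disproportionate energy losses. The site mean speed is assumed.
RHO = 1.225        # air density, kg/m^3 (sea level, 15 C)
TREND = -0.018     # m/s per year, decline in strong winds (Guo et al 2010)

def power_density(v):
    """Available wind power per unit swept area, W/m^2: 0.5 * rho * v^3."""
    return 0.5 * RHO * v ** 3

v_now = 7.0                      # assumed site mean wind speed, m/s
v_later = v_now + TREND * 20     # after 20 years of the observed trend
loss = 1 - power_density(v_later) / power_density(v_now)
print(f"{loss:.1%} less available power")   # ~15% loss from a ~5% speed drop
```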

The point is — climate change will affect future wind capacity, both globally and locally.  Much more research is needed to adequately describe the predicted state of wind generation as a renewable resource.  The reliability and economic feasibility of the sector depend entirely on forecasted wind speeds and anticipated variability.

Energy efficiency vs Growth

October 27, 2011

From the late 1970s through the early 1980s, demand for electricity was growing faster than new supply could be built.  Utilities’ common answer was to build more plants.  This idea of unimpeded growth was beset on both sides: it was increasingly expensive, and its environmental effects could no longer be ignored.

Regulators understood that the answer lay in reducing consumption through efficiency measures.  As a boon for national security, “slower growth would also lessen pressure to explore for new coal, petroleum, and uranium, thus moderating the chance of environmental catastrophes” (Hirsch).  Herman Daly pointed out that a “blind faith” in growth to cure issues of “poverty, unemployment, inflation, and pollution” was myopic.  He developed the theory of steady-state economics, in which efficiency and renewable energy replace nuclear and fossil fuels.  Daly and others, such as Sant, Rosenfeld, and Lovins, saw energy efficiency as a way to deal not just with environmental issues, but also with those of economics and national security.  This helped policy makers become sympathetic to efficiency and conservation as solutions to the era’s economic and security problems — rate hikes from building new power plants, and diminishing energy reserves.

At the time, nuclear power was thought to be a panacea for cheap energy — low operating costs and negligible emissions.  However, the costs of building the plants — coupled with successive retrofitting measures — drove the price to multiple times that of other sources (i.e., fossil fuels), introducing massive rate hikes for customers.  Utilities in California saw that new plants weren’t necessarily needed if efficiency measures could be taken; “PG&E executives argued in 1979 that energy efficiency expenditures made sense because they ‘lessened the need to finance new facilities at high capital costs which depress earnings'” (Hirsch).  As a result, conservation and efficiency measures were introduced as a potent tool to replace, in some cases, the construction of new power plants.

It is interesting that a similar argument is being made today — the nation is being goaded into building new power plants, especially coal plants, to meet the needs of a growing population.  Coal is meant to reduce dependence on foreign oil, but at what point will the environmental effects of coal be overshadowed by their economic consequences, such as reduced resiliency in an unstable climate?  Economics is certainly embedded in the environment — you can’t expect to destroy your resources and still have something to trade.

Carbon capture and sequestration (CCS) is often argued to mitigate the effects of CO2 emissions, yet no coal plants today are actually being built with the needed infrastructure (only “CCS ready”).  The claim is that the expense is too high — consumers would see a “rate hike” similar to the one that followed the introduction of new nuclear plants in the 1980s.  If coal plants with CCS are too expensive to build, there must instead be a way to reduce demand across the board, or to make electricity consumption more efficient.  This problem was solved in the 1980s with energy efficiency, so why not now?


PURPA & Section 210

October 20, 2011

In response to the 1973 OPEC oil embargo, lawmakers in the U.S. sought new methods for generating energy domestically.  One of the pieces of legislation that resulted from this push to use electricity more efficiently and to promote renewable generation was the Public Utility Regulatory Policies Act (PURPA).  Commonly referred to as the “rate reform” bill, PURPA’s primary functions were to place new restrictions on the combustion of natural gas and petroleum, effectively promoting a return to dependence on coal (a domestic fossil fuel resource), and to force utilities to encourage efficient energy usage among consumers, meaning reduced revenue from sales.  Most importantly, a pivotal (but largely overlooked) section of the bill required utilities to buy electricity from “qualifying” non-utility generating facilities (QFs).  This last provision (Section 210) was enormously effective in encouraging creative solutions for diversifying renewable electricity generation.

Historically, there were considerable financial barriers to small (start-up) electricity generating facilities, impeding potential innovations in renewable energy.  Non-utility generating facilities were allowed to sell surplus electricity to the utilities (for distribution), but the customer — the utilities — dictated the price of sale.  This meant there was little motivation for small/alternative non-utilities to generate surplus electricity, let alone to do so in efficient and ground-breaking ways.  Under Section 210 of PURPA, non-utilities were assured a fair price for the sale of their electricity — the utility’s avoided cost of generation.  Since the utilities were now required to buy electricity from QFs, and to do so at what amounted to a replacement cost, this effectively dissolved the natural monopoly that utilities held on generating electricity.  The QFs supplied an increasingly varied mix of electricity — meeting Carter’s request to promote renewables — that was channeled through the utilities for distribution.  This suggests a system in transition, where the archaic “natural” utility monopolies increasingly served as a skeleton for distribution.  It seems as though the notion of monopolistic control of distribution would naturally come into question as well.
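In pricing terms, Section 210 changed the rule roughly as follows (a minimal sketch; the dollar figures are invented for illustration, and real avoided-cost determinations are far more involved):

```python
# PURPA Section 210 in miniature: the utility must pay a qualifying
# facility (QF) its avoided cost -- what the utility would have spent to
# generate or buy that energy itself. All dollar figures are invented.
def qf_payment(qf_kwh, avoided_cost_per_kwh):
    """Payment owed to a QF, priced at the utility's avoided cost."""
    return qf_kwh * avoided_cost_per_kwh

GWH = 1_000_000  # kWh
print(qf_payment(GWH, 0.06))   # $60,000 at a $0.06/kWh avoided cost
# Before PURPA the utility could dictate, say, $0.01/kWh ($10,000 per GWh),
# too little to make surplus generation worth a small producer's while.
```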

Decentralized electricity generation is arguably a net benefit to the nation; the results have been increased usage efficiency (decreased waste), increased generation innovation (progressively less reliance on fossil fuels, although still substantial), and increased supply resiliency (the system is less affected by the outage of a small generation station).  Can the same logic be applied to the distribution of electricity?  In general, centralized distribution is more efficient than decentralized: consumers don’t need multiple connections to the grid for different energy suppliers, nor is the physical structure of the grid overly redundant.  In theory, if maintenance of the transmission infrastructure (power lines) remains centralized, avoiding unnecessary redundancy, the system should be adaptable to decentralized distribution.  With the push for legislation such as community choice aggregation, the benefits of decentralized distribution are growing more apparent.


looking at biofuels holistically

October 13, 2011

To adequately evaluate potential renewable alternatives to fossil fuel energy, it is necessary to weigh benefits against costs, optimizing for reduced environmental impact.  Life-cycle assessment (LCA) is a method in which impacts across the entire product life cycle are considered, giving a more comprehensive view of the environmental trade-offs between one product and another.  Applied to alternative energy generation, LCA can account for the GHG emissions of the various processes behind a given mechanism.  Biofuels have been proposed as an alternative to fossil fuels because it is generally thought that their net GHG emissions would be zero.  Burning conventional fossil fuels “puts carbon into the atmosphere that would otherwise remain safely stored underground in crude oil” (1).  Fuel from cultivated crops is closer to a closed-loop system: the plants are grown (carbon sink) for the express purpose of being transformed into biofuels (carbon source).

Certain crops are more efficiently converted into usable transportation fuel than others.  For example, cellulosic ethanol is considered the only biofuel to provide a net reduction in GHG emissions (2).  The issue with using crops for fuel comes more from displacing land destined for food or other functions to use it primarily for fuel.  Current crop production is engineered to meet the needs of the population, so it follows that “biomass production for fuels could induce deforestation or could displace existing products from land currently used for food, forage, and fiber” (3).  It would be ideal to “produce biofuels on marginal or degraded land” (3), but this neglects the fact that land with any sustained vegetation is already a carbon sink; repurposing that land removes the pre-existing sink.  Searchinger et al point out that while “plants take carbon dioxide out of the atmosphere… using land for biofuels sacrifices other benefits of keeping land in its existing use” (1).  Most lifecycle analyses neglect this net increase in emissions from displacing land for biofuels — the danger comes with expanding biofuels in ways that “trigger land use changes with greenhouse gas emissions that overwhelm the benefits” (1).  LCA can only answer this question if it is applied holistically.  In the case of biofuels, land use cannot be ignored; perhaps a more distributed decision process for implementation would be more appropriate for this particular resource.
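Searchinger et al's point can be made concrete with a toy carbon ledger (all numbers below are invented for illustration; real LCAs track many more stages and flows):

```python
# Toy lifecycle ledger for a biofuel, in g CO2-equivalent per MJ of fuel.
# All values are invented for illustration; the point is how the balance
# shifts once foregone land-use sequestration is counted.
stages = {
    "farming_inputs": +25,   # fertilizer, diesel, etc.
    "refining":       +15,
    "combustion":     +70,
    "crop_uptake":    -70,   # CO2 fixed by the growing crop
}
land_use_change = +40        # lost carbon sink, amortized per MJ

without_luc = sum(stages.values())             # +40 gCO2e/MJ
with_luc = without_luc + land_use_change       # +80 gCO2e/MJ
print(f"ignoring land use: {without_luc:+} gCO2e/MJ")
print(f"counting land use: {with_luc:+} gCO2e/MJ")
# A fuel that looked like a large improvement over gasoline can end up
# nearly equivalent to it once the displaced sink is on the books.
```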

(1) Searchinger et al
(2) Farrell et al
(3) McKone et al

week 7

October 10, 2011

Corporations, as defined by Citizens United, have the same rights as any other citizen in our nation.  However, they aren’t required to share responsibilities in the same way as natural citizens, nor is the issue of corporate citizenship addressed equitably in all nations.  Unlike persons, corporations’ primary interest is to increase profits in any way possible.  Michael Watts begins to address the issue of corporate responsibility with respect to human rights in his article, “Righteous Oil? Human Rights, the Oil Complex, and Corporate Social Responsibility (CSR).”  He claims that corporations increasingly depend on their handling of human rights issues, and on an “image” of social acceptability: “the recent questioning of the ethics of U.S. businesses on American soil has been matched by the rise of a global CSR movement, itself a product of both domestic and international forces arising from increased capital flows (especially direct foreign investment) and the deregulation of trade and investment” (Watts).  When the burdens of corporate actions are shouldered unfairly by society, these corporations come to be seen as unfavorable and lose potential profits.  Corporations have proven they will not claim their share of responsibility unless society coerces them through economic measures.  Watts argues that this sort of voluntary regulation has begun to happen in the United States in recent decades, but that regulatory standards are nonuniform at best and often nonexistent.

I would argue that the debate over the proposed Keystone XL pipeline is an apt example.  Many companies, such as Koch Industries, stand to make large profits off this pipeline (1).  There is enormous social resistance to the project, notably the peaceful protests at the White House in mid-September (2).  The public is trying to make its wishes heard — it doesn’t want an expansion of oil dependence, especially in the exceedingly dirty form of tar sands.  While some government officials agree that the pipeline shouldn’t be built, the State Department green-lighted the project in a recent environmental review (2).  This may be an example of how corporations, resisting pressure to take on social responsibility, manage to reduce their need for CSR through greenwashing.  Watts notes that “companies use minimal standards of their own setting to establish credibility with shareholders, governments, and multilateral donors”.  Very likely, the companies that stand to profit from the pipeline have convinced those who wrote the environmental review that the project would not have detrimental environmental impacts.  The problem arises from conflicting definitions of detrimental environmental impact: those who will live close to the pipeline would argue that the daily negative impacts on their lives are immense, and those who protested at the White House would argue that the environmental impacts of reliance on dirtier oil are exceedingly negative.  This suggests that CSR means nothing until regulations are uniformly defined, and potential impact is uniformly assessed by both corporations and society.

(1) http://insideclimatenews.org/news/20111004/koch-brothers-koch-industries-flint-hills-financial-interest-canada-energy-board-keystone-xl-pipeline

(2) http://www.npr.org/2011/09/01/140117187/for-protesters-keystone-pipeline-is-line-in-tar-sand

Yergin’s Paradox

September 29, 2011

In an article published last week in the Wall Street Journal, Daniel Yergin makes the categorically aberrant claim that peak oil, as it has come to be defined by Hubbert, is a fallacy.  His main argument is that Hubbert’s peak oil logic does not account for evolving technologies and adaptive global politics.

The peak oil theory is a scientific construct that describes a finite resource and the economic response to its eventual depletion.  Hubbert applied this logic to global oil reserves, saying that (1) the laws of physics demand that there be a finite amount of any given resource, and (2) once half the resource is depleted, the energy return on investment (EROI) for extraction and use increasingly diminishes.  Simple physics, simple economics (Occam’s razor was obviously applied in this logic).
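Hubbert's model treats cumulative extraction as a logistic curve, so the production rate is bell-shaped and peaks exactly when half the ultimately recoverable resource is gone.  A minimal sketch with illustrative (not fitted) parameters:

```python
import numpy as np

# Hubbert curve: production is the time-derivative of a logistic in
# cumulative extraction. q_total and k are illustrative, not fitted.
def hubbert_production(t, q_total, k, t_peak):
    """Production rate at time t; peaks at t_peak, when Q(t) = q_total/2."""
    e = np.exp(-k * (t - t_peak))
    return q_total * k * e / (1 + e) ** 2

t = np.arange(1900, 2101)
p = hubbert_production(t, q_total=2000.0, k=0.05, t_peak=2005)  # Gbbl
print(f"peak: {p.max():.0f} Gbbl/yr in {t[p.argmax()]}")        # 25 Gbbl/yr, 2005
# Integrating p over all time returns q_total, the finite-resource
# premise (1); the post-peak downslope is the diminishing-EROI side (2).
```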

Yergin claims that Hubbert was short-sighted to simplify the logic behind his theory of peak resource.  He suggests that global oil production will actually plateau, not declining for nearly half a century, at which point “that decline [will] not come from a scarcity of resources, but from greater efficiency, which will slacken global demand.”  In essence, Yergin makes the broad assumption that demand-side economics (reduced consumption via efficiency measures) will drastically alter our projections for oil reserves.  Even if this is not his primary evidence for “plateau oil”, it neglects the Jevons paradox — technological progress that yields efficiency tends to increase consumption (demand) rather than reduce it, as Yergin claims.
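The rebound logic is easy to sketch: an efficiency gain lowers the effective price of an energy service, and if demand for that service is elastic enough, total fuel use rises (the elasticity values below are illustrative assumptions):

```python
# Rebound sketch: an efficiency gain makes each unit of energy service
# cheaper; under constant-elasticity demand, sufficiently elastic demand
# means total fuel use rises. Elasticity values are assumptions.
def fuel_use(efficiency_gain, elasticity, base_fuel=100.0):
    """Fuel consumed after an efficiency gain, constant-elasticity demand."""
    service_price = 1 / (1 + efficiency_gain)      # cheaper per unit service
    service_demand = service_price ** (-elasticity)
    return base_fuel * service_demand / (1 + efficiency_gain)

print(f"{fuel_use(0.25, 0.5):.0f}")   # inelastic demand: fuel use falls (~89)
print(f"{fuel_use(0.25, 1.5):.0f}")   # elastic demand: fuel use rises (~112)
```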

The next flaw in Yergin’s argument comes from his loose definition of “oil” in the context of quantifying world oil production.  He heedlessly defines oil, in the scope of “peak oil”, as any potential liquid hydrocarbon that can be extracted from the planet.  While this definition is useful for understanding potentially usable oil — and why we won’t need to transition entirely away from oil immediately — it neglects the basis of Hubbert’s logic.  The basic tenet of peak oil is that there is a finite amount of any given resource: a finite amount of “sweet” (low-sulfur) and other conventional oils, a finite amount of heavy oil, a finite amount of hydrocarbons extractable from gas and coal, and a finite amount of potential from oil shale.  Each of these can be readily quantified by individual oil type.  Farrell and Brandt point out that indeed “fossil fuel resources abound.”  Yes, there is a lot of potential in the fossil fuels we haven’t yet tapped (non-conventional oil).  However, our current society is built on conventional oil — tapping the reserves Yergin claims are readily available requires transitioning from “high quality resources to lower quality resources”.  Over at The Oil Drum, they point out that “there is simply no point speculating about vast unconventional oil resources replacing the easy flows of light sweet crude upon which our current society and economy is based” (3); the energy cost of these transitional fossil fuels is “quite simply too high”.

Basically, not all oil is created equal, so it’s inaccurate to lump it all together when quantifying future oil production.


The Greater Good

September 22, 2011

In his book, The Great Transformation, Karl Polanyi describes the rise of market liberalism — the mechanisms that effect a “great transformation” in how an economy functions in relation to society.  The primary tenet of market liberalism is that human society “should be subordinated to self-regulating markets”, the idea being that globalization is both desirable and inevitable in a world economy increasingly “integrated through expanded trade and capital flows.”  It was thought that the most direct way to level the world market — optimizing for the greatest flow of skill and materials — was to link all national economies to a gold standard; this utopian ideal was supposed to “bring a borderless world of growing prosperity.”  Instead, the result was an “intensifying of … the nation as a unified entity”, in contrast to a world of open borders, as each nation sought to “increase their use of protective tariffs, … to gain predictability in the market … and prevent vulnerability to sudden and unanticipated gold outflows.”  In essence, the transformation to market liberalism at the onset of the industrial revolution naively characterized the economy as a separate entity to which nation-states should be subordinated.

The main fallacy of the liberal market is the notion that it can be completely disembedded from society.  Polanyi asserts that the market is in fact always a function of society (of politics, religion, and social relations).  To reverse these roles — to make society a tool of the market — would mean “no less than running society as an adjunct to the market.”  This requires a market economy to turn “human beings and the natural environment into pure commodities, assuring the destruction of both.”  The second fallacy, which follows from this logic, is the idea that labor, land, and money can be commoditized.  In reality, none of the three is produced as a tangible good for sale on the open market; they are distorted by being treated as goods for sale, subject to the same market fluctuations.  As a result, this reliance on market self-regulation “forces ordinary people to bear higher costs” — the high costs are externalized to society.

This goal of the liberal market — to disembed itself from society and to treat society’s elements as commodities for sale like any other goods — is inherently unstable in the real world.  The historic reaction has been to “transcend the self-regulating market by consciously subordinating it to a democratic society” — to protect that which should not be commoditized.

Neo-liberals would argue that “the expansion of government would take an oppressive” form, and that “all nations have to do is trust in the effectiveness of self-regulating markets.”  In reality, the fascist impulse — an oppressive government — arises only when society is protected from the market at the cost of human freedom.  This still treats the government as subordinate to the market; the view is that government will seek to exploit societal freedom under the veil of protection.

In reality, this “second great transformation” is the rise of fascism — the result of market liberalism in “localized contingencies where fascist regimes were successful.”  This parallels Amory Lovins’s view of a “managed society which rules by a faceless and widely dispersed complex of warfare-welfare-industrial-communications-police bureaucracies with a technocratic ideology” (Road Not Taken).

If we treat the government as an adjunct to the economy — the free market reigning supreme — how can we expect government control to reflect anything but the commoditization of society?  Polanyi starts to answer this with his “moral argument that it is simply wrong to treat nature and human beings as objects whose price will be determined entirely by the market.”  The free market claims to seek the greatest wealth per capita, but the real goal should be maximizing overall happiness (not merely measured in wealth) for the greatest number of people.  This requires further research into the societal implications of utilitarian morality.


What’s your definition of “clean” energy?

September 12, 2011

In his 2011 State of the Union address (SOTU), Obama presents a plan for the nation to produce 80% of its electricity from “clean” sources by 2035.  To partially meet this target, he qualifies nearly everything but dirty coal as clean energy: “Some folks want wind and solar.  Others want nuclear, clean coal and natural gas.  To meet this goal, we will need them all” (1).  Each energy source is not treated equally in this framework — energy from conventional non-renewables such as natural gas is counted as 50% clean (2).  Effectively, it seems as though Obama intends to specifically target dirty coal, which may not be a step in the wrong direction, except that there are currently no clean coal plants in operation in the US, so all coal is dirty coal.
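The accounting behind such a standard is a simple weighted credit (a toy sketch; the credit values follow the post, but the generation mix below is invented for illustration, not actual US data):

```python
# Clean-energy-standard accounting: each source earns partial "clean"
# credit. Credits follow the post (gas = 0.5); the generation mix below
# is invented for illustration, not actual US data.
credits = {"wind": 1.0, "solar": 1.0, "nuclear": 1.0,
           "gas": 0.5, "coal": 0.0}
mix_twh = {"coal": 1700, "gas": 1000, "nuclear": 800,
           "wind": 120, "solar": 5}   # invented annual generation, TWh

clean = sum(mix_twh[s] * credits[s] for s in mix_twh)
share = clean / sum(mix_twh.values())
print(f"{share:.0%} 'clean' under this accounting")   # ~39% with this mix
```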

Why is Obama targeting electricity generation as a major source of greenhouse gases?  Electricity itself is not an energy source — it must be generated from primary sources (wind turbines, combustion generators, etc.).  Huge amounts of energy are lost in the process: depending on the source, physical generation may be only around 30% efficient, and there are additional losses in transmission.  The relative combination of primary energy sources therefore directly affects the emissions created at the point of generation.  The “clean” sources that are non-renewable (coal, gas, nuclear) are better adapted to centralized generation stations than are renewables (solar, wind), and centralized stations are more appropriate for generating huge quantities of on-demand electricity, regardless of the application.

Amory Lovins would ask why we are even pursuing centralized electricity generation in the long term.  In his seminal paper, Energy Strategy: The Road Not Taken, he claims that one of the primary barriers to using the energy we have more efficiently is the appropriate matching of specific energy sources to end uses.  He asserts that power-station generation of electricity is inherently and needlessly wasteful of our resources: “the laws of physics require that a power station change three units of fuel into two units of almost useless waste heat plus one unit of electricity.”  The application of this generated energy is not appropriately matched with its source; “this electricity can do more difficult kinds of work than can the original fuel, but unless this extra quality and versatility are used to advantage, the costly process of upgrading the fuel – and losing two-thirds of it – is all for naught” (3).
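Chaining Lovins's one-in-three conversion with typical grid losses gives the end-to-end picture (a small worked example; the ~7% transmission-loss figure is an assumed typical value, not from the paper):

```python
# End-to-end efficiency of delivered electricity from a thermal plant.
# Generation efficiency is Lovins's one-in-three; the transmission-loss
# figure is an assumed typical value, not from the paper.
generation_eff = 1 / 3     # "three units of fuel ... one unit of electricity"
transmission_eff = 0.93    # ~7% lost in transmission and distribution

delivered = generation_eff * transmission_eff
print(f"{delivered:.0%} of primary fuel energy reaches the outlet")  # ~31%
# Using that premium electricity for low-grade heat wastes the quality
# bought at such cost -- the core of Lovins's end-use matching argument.
```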

Obama’s 2011 SOTU implicitly targets electricity generation as a primary source of needless emissions.  Reframing the debate to include appropriate end-use applications of electricity (and the frameworks of generation and delivery that underlie them) would drastically reduce emissions, while simultaneously reducing direct dependence on non-renewable energy consumption.

(1) http://www.nationaljournal.com/whitehouse/exclusive-obama-to-declare-the-rules-have-changed–20110125
(2) http://blogs.cfr.org/levi/2011/01/26/a-new-twist-on-obamas-clean-energy-goal/#more-615
(3) Lovins, Energy Strategy: The Road Not Taken – http://www.rmi.org/rmi/Library%2FE77-01_EnergyStrategyRoadNotTaken