A Guest Post by Andrew Needham
In the wake of President Obama’s announcement of the new EPA guidelines aimed at cutting carbon emissions from coal-burning power plants by 30% by 2030, I’ve been growing increasingly frustrated about how this story is being reported. We’ve gotten a heavy dose of political analysis about how this ruling will affect the midterms – reporting that focuses on the short-term political effect of a policy meant to create long-term changes, the type of reporting that, as Jon Chait writes, is an excellent proxy for figuring out which newspapers think their readers are idiots. There has also been some pretty good reporting from coal country, like the story on NPR this morning from Greene County, Pennsylvania, where people legitimately wondered, in heart-rending fashion, what they will do when the area’s coal mines see a 30% cut in production.
What’s not being covered at all is how we got in this situation. Some of this is the personal frustration of someone who just finished a book about the rise of coal-burning power plants. I know some things. Why aren’t they talking to me? But beyond my personal pique, there are some pretty big historical questions that have gone begging, not only in the reporting of this specific policy decision, but in the way that climate change gets covered more generally. And they’re questions that are central to figuring out strategies that are not only effective but just. Questions like “why does the US generate electricity by burning coal in the first place?” “when did this start?” and “why don’t most people living in cities and suburbs see all these coal burning power plants everyone’s talking about?” The answers to those questions, as I hope to show in what I fear may be a long post, lie in the ways residents of metropolitan America have been promised boundless electricity since World War II and in the ways the source of that electricity has been deliberately located outside the daily spaces of those same people.
Social critics writing in the 1930s would be shocked by our coal-burning dilemmas. Writing in 1934, Lewis Mumford called coal the fuel of the Paleotechnic Era, the fuel to blame for “a befouled and disorderly environment” and a general “upthrust into barbarism” of abandoned mine shafts, grimy tailing piles, and the pallor of smoke that hung in the air of industrial cities.
It was electricity that was supposed to bury the “maggoty corpse” of the Paleotechnic Era. “With electricity the clear sky and clean waters come back again,” Mumford wrote. Electricity would create a new age of garden cities, alloyed metals (Mumford really liked aluminum), and “flowing energy.” By removing the necessity of locating cities near the waterways where coal was easily transported, electricity would free human work from the constraints of place, allowing people to work in “the more salubrious seats of living.”
That future obviously did not come to pass. That electrification failed to kill coal stems from two developments in the years since Mumford wrote. First, the politics and practices of metropolitan development encouraged ever-increasing levels of energy consumption in American homes. Second, those same politics and practices hid coal-fired electrical production in spaces far from the daily lives of those consumers.
Like so many narratives of recent urban history, the first story centrally involves the Federal Housing Administration. Until the 1930s, ownership of large appliances such as refrigerators or washing machines remained almost exclusively the province of American households in the top 20 percent of income. Appliances remained expensive, and most houses were wired for only limited amounts of electricity: normal wiring was sufficient to provide power for lighting but lacked the heavy circuits necessary to support large appliances. Fewer than 25 percent of American homes owned more than one small appliance such as an iron or a vacuum.
The FHA democratized appliance ownership and household electrical use. Title I of the National Housing Act guaranteed loans for home modernization and appliance purchases, in much the same way as the act guaranteed home mortgages. The most important change, however, was the institutionalization of a uniform set of expectations about home electrification. The FHA underwriting manual allowed mortgage insurance only in homes with multiple electrical outlets distributed throughout the house to allow use of portable appliances and sufficient illumination. Furthermore, it mandated that, for mortgages to receive FHA guarantees, homes should have separate circuits for light and appliances, ensuring that home wiring would be able to handle increased electrical loads. The FHA, as Ronald Tobey writes, “effectively created the national mass market for electrical modernization of the home.”
The FHA guidelines made increased electrical capacity a condition of new home construction for the vast number of new suburban homes built after World War II. This change opened a vast new market for electrical utilities. Previously, utilities downplayed the importance of residential construction in growing their customer base, focusing instead on industrial users who promised to be large electrical consumers. Now, however, electric utilities turned to residential consumers with gusto. Industry journals filled with articles titled “Sell or Die” and “Sell – and Sell – and Sell.” Utility ads touted the benefits of electricity for modern family life using the friendly cartoon figure “Reddy Kilowatt,” who promised ever-increasing supplies of electricity.
Embracing the expectations of postwar domesticity, they encouraged men to buy their wives “electric valentines,” appliances that would make women’s lives easier, letting “her live better electrically.”
Utilities worked with homebuilders, offering free vacations to builders who offered “the utmost in modern electric living.” Appearing at the Central Arizona Home Builders Association in 1957, one utility executive suggested installation of an “electric barbecue and rotisserie” and the placement of exterior outlets that would allow homeowners to use “lights, radios, even televisions” on their patios.
Appliance manufacturers followed similar scripts. The Appliance Manufacturers Association told housewives to sit on the couch and “relax while an automatic washer does your work.”
And General Electric, Westinghouse, and Whirlpool sponsored the “Live Better Electrically Medallion,” sending their spokesmodels Betty Furness, Fran Allison, and Ronald Reagan around the country to tour and promote homes that met “modern standards for wiring, appliances, and lighting,” all of which “means a wonderful new life for you and your family!”
The relationship between electrification and residential modernity led to dramatic increases in residential electric consumption in the postwar years. In Phoenix, Arizona, the city I write about, the average home in 1945 consumed 1,453 kilowatt hours of electricity annually. By 1960, that number had jumped to over 5,500 kWh annually. During the 1960s, the decade during which refrigerated air conditioning largely replaced swamp coolers even in lower-priced homes in Phoenix, residential electricity consumption jumped again, to over 11,000 kWh annually. Given the dramatic postwar population growth of Phoenix – the jump of its SMSA population from approximately 200,000 at war’s end to more than one million people by 1970 – total residential electrical consumption rose from 86,500 to 2,408,500 megawatt-hours annually, an increase of almost 3000%.
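A quick back-of-envelope check of the figures above (a sketch of my own, not from the post): the area-wide totals only square with the per-home numbers if they are read as thousands of kWh, i.e. megawatt-hours – that unit reading is my assumption.

```python
# Back-of-envelope check of the Phoenix consumption figures quoted above.
# Assumption (mine, not the author's): the area-wide totals are in
# thousands of kWh (MWh), which makes them consistent with the per-home
# figures and the SMSA population.

total_1945 = 86_500      # area-wide residential consumption, 1945 (MWh, assumed)
total_1970 = 2_408_500   # area-wide residential consumption, 1970 (MWh, assumed)

growth_pct = (total_1970 - total_1945) / total_1945 * 100
print(f"Residential consumption grew {growth_pct:.0f}%")  # → 2684%, i.e. "almost 3000%"

# Sanity check against the per-home figure: ~11,000 kWh per home in 1970
# implies roughly 219,000 homes, plausible for an SMSA of one million people.
homes_1970 = total_1970 * 1000 / 11_000
```

Under that unit assumption, the stated "almost 3000%" increase and the implied number of households both come out plausible.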
In short, boundless electricity became a promise of modernity in metropolitan America. The unanswered question was where that electricity would come from.
That most of the electricity to meet the burgeoning demand of all those new, electrically intensive homes would come from burning coal was not a foregone conclusion. As Mumford’s words earlier in this post suggest, coal was broadly considered a fuel of the past. And his words reflected the status of coal both culturally and materially. In the 1930s, hydroelectric dams had become icons of American modernity, with a photo of Fort Peck Dam gracing the inaugural cover of Life magazine.
Following World War II, enthusiasm for the peaceful use of atomic power led to promises that electricity “too cheap to meter” would accompany the new age of atomic cars.
Coal use, by contrast, dwindled as railroads switched to diesel power and manufacturers shuttered their own power plants in favor of buying utilities’ electricity. American coal production hit its lowest point of the twentieth century in 1954, when only 392 million tons of coal were extracted, off more than 200 million tons from its earlier peak. In some regions, coal was barely used at all: only 5% of the West’s electricity was generated by coal in 1953.
Even at coal’s nadir, however, some trends suggested its resurgence. Promising dam sites were quickly developed, and an emerging environmental movement successfully prevented others from being developed at all. Atomic energy proved far more expensive than its enthusiasts anticipated; rather than creating electricity too cheap to meter, it required continued public subsidy. Natural gas supplies proved difficult to extract with the technology of the 1950s.
Most importantly, changes in electrical transmission technology favored the use of coal. The years immediately following World War II saw dramatic increases in the carrying capacity of power lines. Transmission lines carry power in proportion to the square of their voltage: if voltage is doubled, four times as much power flows. And while capacity rose with the square of the voltage, construction costs rose far more slowly. Thus, a 138-kv line that could carry 150,000 kilowatts cost $60,000 a mile to build in 1955, while a 345-kv line cost $100,000 a mile but could carry 900,000 kilowatts.
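The voltage-squared scaling can be checked against the 1955 figures quoted above (a sketch of mine, not from the original):

```python
# Transmission capacity scales with the square of line voltage.
def capacity_ratio(v_low_kv: float, v_high_kv: float) -> float:
    """How many times more power the higher-voltage line can carry."""
    return (v_high_kv / v_low_kv) ** 2

# The two line classes from the text: 138 kV vs 345 kV.
ratio = capacity_ratio(138, 345)
print(f"A 345-kV line carries {ratio:.2f}x the power of a 138-kV line")  # → 6.25x
# 150,000 kW * 6.25 = 937,500 kW, close to the 900,000 kW quoted.

# Cost per kilowatt of carrying capacity per mile, from the 1955 figures:
cost_per_kw_138 = 60_000 / 150_000    # $0.40 per kW-mile
cost_per_kw_345 = 100_000 / 900_000   # about $0.11 per kW-mile
```

The punchline is the last two lines: on these figures, tripling the voltage cut the cost of delivering a kilowatt over a mile by almost three-quarters, which is what made distant “coal by wire” plants economical.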
This change suggested a new geography of electrical supply. Previously, power plants had been located largely in metropolitan areas. Now, they could be located far away, shipping “coal by wire” to distant consumers.
Shipping “coal by wire” allowed the negative effects of burning coal to be exported to spaces far from the cities and suburbs where the majority of consumers lived. As “quality of life” became the way metropolitan boosters sold their areas to both prospective residents and companies, such a system had broad advantages. Metropolitan areas could keep the implicit promises of boundless energy without the smoke and ash piles that befouled industrial cities. Metropolitan consumers could continue enjoying the benefits of the coal age apparently without its costs.
That geography, constructed from the late 1950s onward, created the world that we live in today. In the 1960s, utilities built 328 new coal-fired power plants. Since 1970, they have built 594 more. Some of these were built in population centers. The vast majority, however, were built in rural areas: in eastern Montana, northern Arizona and New Mexico, near Texarkana, in West Virginia, and eastern Kentucky. It is these areas, where coal mining and power plant employment form the be-all, end-all of the local economy, that will be hardest hit. And with some exceptions, the fate of those places is being ignored in the stories about Obama’s plan. This is not to call for a defense of coal. Clearly, the age of coal must decisively end. It is to say that those of us who live in cities and suburbs do owe responsibility to those places and those people who have for half a century enabled our lifestyles.
This means, among other things, changing the stories we tell about the relationship between cities and climate change. Right now, cities are the victims. Stories about the progress and frustration of the rebuilding efforts after Hurricanes Katrina and Sandy have become evergreens to be rolled out on slow news days in New Orleans and New York. Stories about the now routine flooding in Miami and calls for immediate preparation to forestall the potential abandonment of cities accompanied the horrifying news about the rapid melting of the West Antarctic ice sheet. “Resilience” has become the new buzzword in urban design. My own city of New York just announced it’s getting $355 million in federal money to build a series of storm resistant raised berms that will create a giant U in lower Manhattan. Designed by a Danish architecture firm, they will undoubtedly both protect the city and provide recreational spaces for the overwhelmingly wealthy population they protect.
Obviously, there are lots of people living in cities who are the victims and potential victims of climate change. Even if you don’t believe Katrina and Sandy have anything to do with anthropogenic climate change, it’s pretty clear that rising waters are going to hit folks in the Lower 9th Ward and the Rockaways harder than folks who have berms preemptively built for them.
What’s been largely missing, I think, in all the stories about climate change is how the energy intensity of the metropolitan built environment, and the assumptions about modernity held by many who live there, made cities agents of climate change as well as its potential victims. As urban historians, we have a unique ability to begin to fill in that missing part of the story. And if, as citizens, we need to move, as Zadie Smith insists, from questions of “what have we done” to “what can we do,” those of us who write about cities can provide better answers to that first question – answers that will allow people to address the second with a clearer sense of the inequalities and responsibilities that must be redressed for responses to climate change to be both effective and just.
Andrew Needham is an associate professor of history at New York University. His forthcoming book from Princeton University Press, Power Lines: Phoenix and the Making of the Modern Southwest, explores the role of coal on Native American land in the development of the metropolitan Southwest.