The Conversation
Turnbull is pursuing 'energy certainty' but what does that actually mean?
Today Malcolm Turnbull met with energy retailers to discuss high power prices, for the second time this month. The retailers agreed to try even harder to inform their customers of cheaper contracts, but they also took the opportunity to call yet again for urgent commitment to a Clean Energy Target (CET).
The prime minister hopes to deliver a CET by Christmas, but has not indicated what the target would actually be.
This is just one more step towards the elusive goal of certainty in the energy market, which politicians, the energy industry and businesses have been calling for with increasing frequency. But underlying the ongoing political scrimmage is the reality that certainty means something very different to each player. It’s particularly difficult to achieve in a time of disruptive change.
Read more: Turnbull to tell power companies: do better by customers
What is certainty?
For politicians, certainty means getting energy prices and policy out of the media, ensuring construction of a new coal-fired power station, or both.
On the other hand, incumbent energy companies want to protect profits by blocking emerging competitors and guaranteeing their revenue.
For emerging energy businesses that sell renewable energy, batteries and smart energy solutions, it’s about opening markets to fair competition and finding a role in a rapidly changing environment.
For business and industry, it’s about access to stable, reliable, reasonably priced energy, so they can get on with their core business.
Households (and voters) also want affordable and reliable energy (and some basic respect from energy companies and politicians), but that doesn’t necessarily mean low prices: it can mean low fixed charges, access to energy efficiency programs, and finance for rooftop solar and batteries. Then they can buy less energy while living in comfortable homes with efficient appliances.
The traditional energy system involves large capital investments and long timeframes. This doesn’t sit comfortably with the agendas of many of the people described above, who want quick solutions – which can be delivered by emerging alternatives.
New solutions create new challenges
Any inflexible baseload power station faces the growing problem of the “duck curve”. That is, solar power is reducing demand for grid electricity during the day but leaving evening peak-time demand untouched – creating an exaggerated upswing in demand after about 4pm.
Read more: Slash Australians’ power bills by beheading a duck at night
This reduced daytime power use deprives a baseload plant of the demand it needs to keep running continuously. Excess electricity during the day also drives wholesale electricity prices down from traditionally high levels. Daytime sales have comprised a large proportion of revenue for baseload generators.
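As a rough illustration of the duck curve (using made-up hourly demand and solar figures, not actual market data), the sketch below computes net demand – total demand minus rooftop solar output – and shows the midday trough and the steep late-afternoon ramp that the remaining generators must cover.

```python
# Illustrative "duck curve" sketch. All hourly figures below are hypothetical,
# not actual National Electricity Market data.

# Hypothetical total (gross) demand in MW for each hour: overnight low, evening peak.
gross_demand = [5200, 5000, 4900, 4800, 4900, 5200, 5800, 6400, 6600, 6500,
                6400, 6300, 6300, 6400, 6500, 6700, 7200, 7900, 8300, 8100,
                7600, 7000, 6300, 5700]

# Hypothetical rooftop solar output in MW: zero overnight, peaking near midday.
solar = [0, 0, 0, 0, 0, 0, 200, 800, 1500, 2100, 2500, 2700,
         2700, 2500, 2100, 1500, 800, 200, 0, 0, 0, 0, 0, 0]

# Net demand is what the rest of the generation fleet actually has to supply.
net_demand = [g - s for g, s in zip(gross_demand, solar)]

midday_trough = min(net_demand[10:16])   # late morning to mid afternoon
evening_peak = max(net_demand[16:21])    # after about 4pm

print(f"Midday net-demand trough: {midday_trough} MW")
print(f"Evening net-demand peak:  {evening_peak} MW")
print(f"Ramp the rest of the fleet must cover: {evening_peak - midday_trough} MW")
```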
Large scale wind and solar without storage are price takers: they’re paid the going wholesale price at the time they generate. They have benefited from the high daytime prices that solar is now undermining, and from high prices on tradeable certificates for renewable energy, driven by shortages caused by Tony Abbott’s “war on renewables”.
Certificate prices should moderate as more renewable energy capacity is built. Future investment depends heavily on decisions regarding national and state clean energy targets beyond 2020.
Batteries, pumped hydro and other storage rely on the gap between the lowest and highest price each day. Solar is reducing, and even reversing, this price gap in the daytime. But morning and evening demand offers some opportunity, as long as excess storage capacity doesn’t flood the market with electricity and depress prices at those times. In future, they will store cheap daytime excess power for use at other times. And storage will be increasingly important as variable renewable energy capacity grows.
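A minimal sketch of that storage arbitrage logic, under the simplifying assumption of one charge/discharge cycle per day and entirely hypothetical prices: the value of the cycle is the spread between the cheapest and dearest hours, less round-trip losses.

```python
# Hypothetical daily wholesale prices ($/MWh) by hour of day; not real market data.
prices = {9: 60, 12: 20, 13: 15, 14: 18, 18: 160, 19: 180, 20: 140}

def daily_arbitrage_value(prices, energy_mwh=10, round_trip_efficiency=0.85):
    """Simple one-cycle arbitrage: charge at the cheapest hour, discharge at
    the dearest hour, net of round-trip losses."""
    buy_price = min(prices.values())
    sell_price = max(prices.values())
    # Energy bought must exceed energy delivered to cover round-trip losses.
    cost = energy_mwh / round_trip_efficiency * buy_price
    revenue = energy_mwh * sell_price
    return revenue - cost

print(f"Value of one charge/discharge cycle: ${daily_arbitrage_value(prices):,.0f}")
```

The narrower the daily price spread becomes, the smaller this value gets – which is why growing daytime solar output squeezes storage revenue at the same time as it creates cheap charging opportunities.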
Improving efficiency is ‘the first fuel’
Demand response, where consumers are paid to reduce demand at times of high wholesale electricity prices, is a serious threat to revenue for generators and energy storage. It usually involves smart management of consumption or use of existing backup generators, with little capital cost.
As the Australian Renewable Energy Agency has found, there is a lot of latent capacity. Its call for bids in May for its pilot scheme – originally aiming to provide 160 megawatts (MW) of reserve capacity – has unearthed almost 700 MW available by December this year, and over 1,900 MW by December 2018.
Our failure to properly manage demand for decades, despite the recommendations of many inquiries, has led to wasteful overinvestment in network and generation capacity that is now exposed to market forces. Now someone will pay for this policy failure: will it be shareholders or consumers?
Energy efficiency improvement adds another unpredictable factor. As shadow environment and energy minister Mark Butler commented at a recent conference, Australian governments have not performed well in this area. But the Finkel Review highlighted its substantial potential and called for governments to do better. The International Energy Agency calls energy efficiency “the first fuel” because it is so big and so cheap.
Our failure to capture energy efficiency is costly for the economy and consumers. Energy suppliers are prepared to invest in projects offering risky annual rates of return of 8-15%, yet energy efficiency opportunities with returns of 20-100% are ignored.
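To make that comparison concrete, here is a hypothetical simple-return calculation (the project cost and savings are invented for illustration): an upgrade that costs $10,000 and trims $4,000 a year off the energy bill returns 40% a year, well above the 8-15% supply-side hurdle.

```python
def simple_annual_return(upfront_cost, annual_saving):
    """Simple (undiscounted) annual rate of return on a project."""
    return annual_saving / upfront_cost

# Hypothetical efficiency upgrade: $10,000 upfront, $4,000 saved per year.
efficiency_return = simple_annual_return(10_000, 4_000)

print(f"Efficiency project return: {efficiency_return:.0%}")  # 40%
print("Supply-side hurdle range:  8-15%")
```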
Another complicating element is consumers, many of whom are no longer passively accepting energy market volatility and increasing prices. “Behind the meter” investment in energy efficiency, demand management, storage, on-site renewables and even diesel generators is making increasing sense. Indeed, social justice campaigners are increasingly calling for action to help vulnerable households be part of the future, not victims.
Lastly, there is the elephant in the room: climate change. Fossil-fuel-sourced electricity generation produces around a third of Australia’s emissions. And there is much more scope to cut emissions from electricity than from many other parts of our economy.
Where to now?
No government can provide certainty for all these competing players. Each faces its own risks and opportunities, and powerful disruptive forces are at work. Trying to provide certainty for some involves propping up declining business models, at the expense of positioning the Australian economy for the future.
Despite criticism from the federal energy minister, states are likely to continue setting their own clean energy targets.
Businesses and households are investing to insure themselves against the policy mess and, in doing so, are transforming the energy system. Local councils and community groups are coordinating action. Emerging businesses are taking risks to capture opportunities. Existing energy businesses are trying to juggle their existing assets while transforming. State governments are trying to win votes and capture jobs in emerging industries. Meanwhile, the federal government’s party room is split over a clean energy target.
The challenge for governments is to nudge this chaotic system in ways that deliver equitable, affordable and reliable energy services.
Disclosure
Alan Pears has worked for government, business, industry associations, public interest groups and universities on energy efficiency, climate response and sustainability issues since the late 1970s. He is now an honorary Senior Industry Fellow at RMIT University and a consultant, as well as an adviser to a range of industry associations and public interest groups. His investments in managed funds include firms that benefit from growth in clean energy. He has shares in Hepburn Wind.
Change Agents: Darren Kindleysides and Don Rothwell on how Australia briefly stopped Japanese whaling
The anti-whaling group Sea Shepherd has called a halt to its famous missions tracking the Japanese whaling fleet in the Southern Ocean.
For the past 12 years the group’s boats have engaged in annual high-seas battles with vessels carrying out Japan’s self-described scientific whaling program. But Sea Shepherd founder Paul Watson has admitted that Japan’s use of military-grade technology such as real-time satellite tracking has left the activists unable to keep up.
Watson also criticised the Australian government over its response to Japan’s whaling program, despite a global ban on most whaling.
Read more: Murky waters: why is Japan still whaling in the Southern Ocean?
Scientific whaling is technically allowed under the International Whaling Commission’s treaty, and countries such as Japan have the right to decide for themselves what constitutes “scientific” in this context.
Australia is not the only government to be accused of reluctance to stand up to Japan. But in 2014, Japan’s pretext for whaling was finally discredited when Australia won a case at the International Court of Justice in The Hague. And, for a year, the Japanese whaling stopped.
This episode of Change Agents tells the back story of how that happened through the eyes of two key players, ANU legal academic Don Rothwell and Darren Kindleysides, who was then campaign manager at the International Fund for Animal Welfare. They worked on a strategy to provide both the legal argument and the political will for Australia to take on Japan in the courts.
Change Agents is a collaboration between The Conversation and the Swinburne Leadership Institute and Swinburne University’s Department of Media and Communication. It is presented by Andrew Dodd and produced by Samuel Wilson and Andrew Dodd, with production by Heather Jarvis.
Andrew Dodd does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
The Australian Greens at 25: fighting the same battles but still no breakthrough
On August 30, 1992 in Sydney, media were invited to a press conference to launch a new national political party: The Australian Greens. It was a Sunday, and no television crews bothered to turn up. One journo who did was Robert Garran from the Australian Financial Review, who reported that:
The Greens Party, representing green political groups in Tasmania, NSW and Queensland, has agreed to a constitution, and aims to contest Senate and House of Representative seats in the next federal election. The high-profile Tasmanian Green MP, Dr Bob Brown, said the party offered the electorate the choice of abandoning the two-party system, which had failed to address the nation’s problems.
Brown, who first rose to fame for his environmental campaign against Tasmania’s Franklin Dam, said his party was “more than a one-issue group”, describing its values as being “about social justice, enhancing democracy (particularly grassroots democracy), solving our problems in a peaceful and non-violent way, and about looking after our environment”.
Read more: The Greens grow up.
The launch was also reported by a rather interesting (and useful if you’re an historian/geek like me) publication called GreenWeek. Its editor Philip Luker was sceptical of the nascent Green movement’s momentum (rightly, as it turned out), offering this verdict:
Drew Hutton of the Queensland Greens is talking through his hat when he predicts green governments all over Australia in the next decade.
Almost 20 years later, during the battle over the fate of Julia Gillard’s carbon price, Brown was interviewed by The Australian. He pushed the timeframe back, predicting that “within 50 years we will supplant one of the major parties in Australia”.
Therein lies the main problem for the Greens. Many of the things they’ve been warning about have come to pass (deforestation, the climate crisis, human rights meltdowns), yet still they haven’t managed to break through with their calls for change. This is even more alarming given that the real history of the Greens precedes their August 1992 launch by more than two decades.
1971 and all that
There was something in the air in the early 1970s. Readers of a certain vintage will remember songs like Neil Young’s After the Gold Rush (“Look at mother nature on the run, in the 1970s”), Marvin Gaye’s Mercy Mercy Me (The Ecology), and Joni Mitchell’s Big Yellow Taxi.
Even the Liberal government of the day could hear the mood music, as the new Prime Minister Billy McMahon created the short-lived Department of the Environment, Aborigines and the Arts. (Not everyone was quite so enlightened; the new department’s minister Peter Howson complained to a colleague about his new portfolio of “trees, boongs and poofters”.)
Meanwhile, a battle was raging in Tasmania over the plan to build three hydroelectric dams that would flood Lake Pedder National Park. In his fascinating and inspiring memoir, Optimism, Bob Brown wrote:
In 1971, Dr Richard Jones, his foot on a Central Plateau boulder, had seen the pointlessness of pursuing ecological wisdom with the old parties and proposed to his companions that a new party based on ecological principles be formed.
The United Tasmania Group, now seen as the first incarnation of the Green Party, contested the 1972 state election, and Jones came within a whisker of being elected.
Lake Pedder was lost, but other battles were still to be fought: green bans, Terania Creek, campaigns against nuclear power and whaling.
In Tasmania the next big skirmish was the Franklin Dam. Green activists mobilised, agitated and trained in non-violent direct action. Amanda Lohrey, in her excellent Quarterly Essay Groundswell, recalls:
An acquaintance of mine in the Labor Party lasted half a day in his group before packing up and driving back to Hobart. “It was all that touchy-feely stuff,” he told me, grimacing with distaste. Touchy-feely was a long way from what young apparatchiks in the ALP were accustomed to.
Those culture clashes between Labor and Greens have continued, despite a brief love-in engineered by Bob Hawke’s environment minister Graham ‘whatever it takes’ Richardson. To the chagrin of Labor rightwingers, the 1990 election was won on preferences from green-minded voters. But by 1991 it was clear that the Liberals would not compete for those voters, and Labor gradually lost interest in courting them.
So in 1992 the Greens went national, and so began the long march through the institutions, with gradually growing Senate success. In 2002, thanks to the Liberals not standing, they won the Lower House seat of Cunningham, NSW in a by-election, but couldn’t hold onto it.
In 2010, after receiving the largest single political donation in Australian history (A$1.68 million) from internet entrepreneur Graeme Wood, the Greens’ candidate Adam Bandt wrested inner Melbourne from Labor, and went on to increase his majority in 2013 and 2016.
Critics, problems and the future
Doubtless the comments under this article will be full of condemnations of the Greens for not having supported Kevin Rudd’s Carbon Pollution Reduction Scheme in December 2009. Despite Green Party protestations to the contrary, Gillard’s ill-fated carbon price wasn’t that much better at reducing emissions (though it did have additional support for renewable energy).
However, we should remember three things. First, Rudd made no effort to keep the Greens onside (quite the opposite). Second, hindsight is 20/20 – who could honestly have predicted the all-out culture war that would erupt over climate policy? Finally, critics rarely mention that in January 2010 the Greens proposed an interim carbon tax until policy certainty could be achieved, but could not get Labor to pay attention.
The bigger problem for the Greens – indeed, for anyone contemplating sentencing themselves to 20 years of boredom by trying to change the system from within – is the problem of balancing realism with fundamentalism. How many compromises do you make before you are fatally compromised, before you become the thing you previously denounced? How long a spoon, when supping with the devil?
You’re damned if you do, and damned if you don’t. Focus too hard on environmental issues (imagining for a moment that they really are divorced from economic and social ones) and you can be dismissed as a single-issue party for latte-sippers. Pursue a broader agenda, as current leader Richard Di Natale has sought to do, and you stand accused of forgetting your roots.
Can the circle of environmental protection and economic growth ever be squared? How do you say “we warned you about all this” without coming across as smug?
As if those ideological struggles weren’t enough, the party is also dealing with infighting between the federal and NSW branches, not to mention the body-blow of senators Scott Ludlam and Larissa Waters becoming the first casualties of the ongoing constitutional crisis over dual nationality.
The much-anticipated breakthrough at the polling booth failed to materialise in 2016. Green-tinged local councils work on emissions reductions, but the federal party remains electorally becalmed.
The dystopian novel This Tattooed Land describes an Australia in which “an authoritarian Green government takes power and bans fossil fuel use”… in 2022. It still sounds like a distant fantasy.
Devastating Himalayan floods are made worse by an international blame game
Devastating floods in Nepal have sparked regional tension, with Nepali politicians and media outlets claiming that Indian infrastructure along their shared border has left Nepal vulnerable.
In a visit last week to India, Nepal’s prime minister Sher Bahadur Deuba released a joint statement with Indian prime minister Narendra Modi pledging to work together to combat future flood disasters. But relations between the two countries remain strained, and many in Nepal still resent India for a three-month blockade of supplies in the wake of the 2015 earthquakes.
Read more: Two years after the earthquake, why has Nepal failed to recover?
One source of this tension is simply the geography of the Himalayas, where a dam or road built in one country can cause inundation in its neighbour.
The result is an international blame game, with India, China and Nepal accusing each other of shortsighted and self-interested politics. Without region-wide organisations to effectively share information and coordinate disaster relief, many more people have suffered.
Tangled geography
Floods are almost annual events in the Himalayas. Huge rivers originating in the Himalayas pass through the densely settled Terai flats that span both India and Nepal, and these rivers swell enormously in the monsoon season.
A rough outline of the Himalayas.
But this year’s floods have been particularly devastating. In the past two months more than 1,200 people have been killed and 20 million others affected by floods in Nepal, India and Bangladesh.
These trans-border floods are a political as well as a logistical problem. In the case of the recent floods, Nepal’s Ministry of Home Affairs pointed to two large Indian dams on the Kosi and Gandaki rivers, as well as high roads, embankments and dykes built parallel to Nepal’s 1,751km border with India, arguing that this infrastructure obstructs the natural flow of water.
India, for its part, has blamed Nepal for creating floods in the past and – although disputed in scientific circles – many believe deforestation in Nepal contributes to water overflow into India.
The problem is that infrastructure in one country can have a serious impact on its neighbours, especially in monsoon season. At least a dozen people were injured last year in clashes over an Indian dam that the Kathmandu Post reported will inundate parts of Nepal when completed.
And the problems aren’t just caused by dams. Hydrologists and disaster experts in Nepal claim that recent floods have been worsened by significant illegal mining of the low Churia hills for boulders and sand, for use in the rapidly expanding construction sector in India.
India, China and Nepal
The disputes aren’t limited to India and Nepal. India and China signed a deal in 2006 to share hydrological information on the huge rivers that run through both their territories, so as to cope better with annual flooding. But earlier this year India’s Ministry of External Affairs accused China of failing to share vital data, exacerbating floods in India’s northeast.
This is not an isolated incident. In 2013 a huge flood in northwestern India, called the Himalayan tsunami, killed around 6,000 people and affected millions more.
At that time, Indian officials claimed that they did not get information from Nepalese officials on heavy rainfall in Nepal’s hills, or on glacier conditions. Nepali officials, in turn, responded that China is in a better position to share information about climatic conditions in that part of the Himalayas. Studies conducted later concluded that efficient information sharing and early warnings would have reduced the resulting damage.
This problem becomes more urgent as the Himalayas come under pressure from climate change. Climate scientists have warned that “extreme floods” in the region are becoming more frequent, driven by less frequent but more intense rainfall.
It is now vital to think differently about how institutions handle these disasters. India and Nepal announced last week that they would establish a Joint Committee on Inundation and Flood Management, and a Joint Team of Experts to “enhance bilateral co-operation” in water management, which is a positive sign.
But the Himalayas urgently need institutions with a region-wide perspective, rather than country-specific remits. Such organisations could efficiently share information on weather patterns, take action to reduce the overall impact of floods, and consult each other when developing infrastructure that could have trans-boundary consequences.
Human interference and myopic political action have intensified the impact of these floods. We now need every country in the region to accept shared responsibility and commit to helping those affected, regardless of their nationality.
Jagannath Adhikari does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
The world protests as Amazon forests are opened to mining
The Amazon, often described as the “lungs of the Earth”, is the largest rainforest in the world. Its extraordinary biodiversity and sheer scale have made it a globally significant resource in the fight against climate change.
But last week the Brazilian president Michel Temer removed the protected status of the National Reserve of Copper and Associates, a national reserve larger than Denmark.
The reserve, known as “Renca”, covers 46,000 square kilometres and is thought to contain huge amounts of copper, as well as gold, iron ore and other minerals. Roughly 30% of Renca will now be open to mining exploration. Renca also includes indigenous reserves inhabited by various ethnic communities living in relative isolation.
The decision, which has been denounced by conservation groups and governments around the world, comes as the unpopular Temer struggles with a crushing political and economic crisis that has seen unemployment rise above 12%.
Read more: With Dilma Rousseff impeached, Brazil is set for years of political turmoil
Political and economic turbulence
Brazil is currently in the middle of the largest corruption scandal in its history. Since 2014, an ongoing federal investigation called Operation Car Wash has implicated elite businesspeople and high-ranking politicians, uncovering bribes worth millions of dollars exchanged for deals with the state oil company Petrobras. According to the BBC, almost a third of President Temer’s cabinet is under investigation for alleged corruption.
There is no doubt that Brazil needs to find ways out of recession and unemployment. As the minister of mining and energy has said, “the objective of the measure [to allow mining] is to attract new investments, generating wealth for the country and employment and income for society.”
However, it’s not clear that this move will benefit ordinary Brazilians. This is not the first gold rush into this area, and the Amazon still has high levels of poverty and many other challenges.
During the 1980s and 90s tens of thousands of miners flocked to gold deposits in the Amazon, driven by high international prices. One of the most famous examples, “Serra Pelada,” saw 60,000 men dig a massive crater in the Amazon Basin.
These mining operations typically provided little economic benefit to local populations. Instead, they attracted thousands of people, which led to deforestation, violent land conflicts and mercury pollution in the rivers.
In reality the Amazon and its people deserve a sustainable model of development, which takes advantage of the outstanding biodiversity and beauty of its standing forests. The historical record shows mining is likely to lead to a demographic explosion, and further deforestation, pollution and land conflicts.
The principle of non-regression
One important aspect of international environmental law is called the “principle of non-regression”. The principle states that some legal rules should be non-revocable in the name of the common interest of humankind. Essentially, once a level of protection has been granted there is no coming back.
This principle is reflected in article 225 of the Brazilian constitution, which lays out the right to a healthy environment:
All have the right to an ecologically balanced environment […] and both the Government and the community shall have the duty to defend and preserve it for present and future generations.
The Brazilian constitution also describes the Amazon forest as a “national heritage”. It must then be treated accordingly.
Read more: Deep in the Amazon jungle, Brazil’s ‘hidden cities’ are in crisis
While the Amazon is a fundamental part of Brazil’s history, it’s also an essential part of the global battle against climate change. The Amazon contains half the world’s tropical rainforests, and its trees absorb and store vast amounts of carbon dioxide.
According to the Intergovernmental Panel on Climate Change, land use, including deforestation and forest degradation, is the second-largest source of global emissions after the energy sector.
Developed countries around the world have committed resources to help Brazil offset the costs of safeguarding its forests. One example is the Amazon Fund, created in 2008. It has received billions of dollars from foreign governments such as Norway and Germany, to combat deforestation and to promote sustainable practices in the Brazilian Amazon.
But with 14 million Brazilians unemployed, further assistance is required to ensure that the country can protect its forests.
As well as governments, companies have also committed billions of dollars to fight climate change and support projects that reduce carbon emissions and promote energy efficiency. Most businesses have also created self-regulatory standards to ensure compliance with international laws and ethical standards.
The decision of the Brazilian government leaves us with two questions. How will the international community honour their commitments to keep global warming below 2℃, if countries begin rolling back their environmental protections? And how will companies involved in mining projects in the Amazon honour their social responsibility commitments and moral obligation towards present and future generations?
The degradation of the Amazon will affect the entire world. The clearing of the Amazon for mining will lead to the emission of thousands of tonnes of greenhouse gases, furthering global warming and causing irreversible loss of biodiversity and water resources, as well as damage to local and indigenous communities.
Let us not take a step back towards more destruction. Rather, let us strengthen the protection of our remaining forests.
I have previously received funding from Swiss Foundations to conduct my Masters, PhD and book publication. Recently, I received funding from the School of Law, Western Sydney University, to conduct research on illegal logging. I am a Member of the World Conservation Union (IUCN) Commission on Environmental Law
Victoria is the latest state to take renewable energy into its own hands
The Victorian government’s intention, announced last week, to legislate its own state-based renewable energy target is the latest example of a state pursuing its own clean energy goals after expressing frustration with the pace of federal action.
The Andrews government has now confirmed its plan for 40% renewable energy by 2025, as well as an intermediate target of 25% clean energy by 2020. The policy, first flagged last year and now introduced as a bill in the state parliament, seeks to reduce greenhouse gas emissions by 16% by 2035.
At a general level, these actions are reflective of the increasing frustration states and territories have experienced at perceived inaction at the federal and even international levels. Neighbouring South Australia has also been pursuing clean energy, this month announcing plans to develop one of the world’s biggest concentrated solar plants in Port Augusta.
Victorian Premier Daniel Andrews has remarked that “it is up to states like Victoria to fill that void”.
Read more: Victoria’s renewables target joins an impressive shift towards clean energy.
It is also, of course, a product of growing concerns regarding domestic energy security and investment confidence. Victoria’s climate and energy minister Lily D’Ambrosio said: “The renewable energy sector will now have the confidence to invest in renewable energy projects and the jobs that are crucial to Victoria’s future.”
National plans?
The Andrews government’s underlying objective is to reinforce, rather than undermine, federal initiatives such as the national Renewable Energy Target and any future implementation of the Clean Energy Target recommended by the Finkel Review.
But federal Environment and Energy Minister Josh Frydenberg has apparently rejected this view, claiming that the new Victorian proposals run counter to the development of nationally consistent energy policy. “National problems require national solutions and by going it alone with a legislated state-based renewable energy target Daniel Andrews is setting Victoria on the South Australian Labor path for higher prices and a less stable system,” Frydenberg said.
Read more: Finkel’s Clean Energy Target plan ‘better than nothing’: economists poll.
A nationally consistent plan is somewhat unrealistic in view of the current fragmented, partisan framework in which energy policy is being developed. The federal government’s apparent reluctance to accept Finkel’s recommendation for a Clean Energy Target is generating uncertainty and unrest.
In this context, actions taken by states such as Victoria and South Australia can help to encourage renewable energy investment. Given that Australia has promised to reduce greenhouse emissions by 26-28% (on 2005 levels) by 2030 under the Paris Climate Agreement, it is hard to see how boosting renewable energy production is inconsistent with broader national objectives.
The renewables target rationale
Mandating a certain amount of renewable energy, as Victoria is aiming to do, helps to push clean energy projects beyond the innovation stage and into commercial development. It also helps more established technologies such as wind and solar to move further along the cost curve and become more economically competitive.
Renewable energy targets aim to stimulate demand for clean energy, thereby ensuring that these technologies achieve better economies of scale. Under both the federal and Victorian frameworks, electricity utilities must source a portion of their power from renewable sources. They can comply with these requirements with the help of Renewable Energy Certificates (RECs), of which they receive one for every megawatt hour of clean energy generated.
Independent power producers can sell their RECs to utilities to earn a premium on top of their income from power sales in the wholesale electricity market. As well as buying RECs, utilities can also invest in their own renewable generation facilities, thus earning more RECs themselves.
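As a rough sketch of how the certificate mechanism works – the load, percentage and price below are hypothetical, not the actual legislated or market values – a retailer’s annual liability is its liable electricity purchases multiplied by the year’s renewable power percentage, with one certificate representing one megawatt hour of eligible generation.

```python
def certificates_required(liable_load_mwh, renewable_power_percentage):
    """Certificates a retailer must surrender for the year:
    one certificate per MWh of its obligation."""
    return liable_load_mwh * renewable_power_percentage

def certificate_cost(certificates, certificate_price):
    """Cost of meeting the obligation by buying certificates on the market."""
    return certificates * certificate_price

# Hypothetical retailer: 2,000,000 MWh of liable load, a 16% renewable power
# percentage, and certificates trading at $80 each (all figures illustrative).
obligation = certificates_required(2_000_000, 0.16)
cost = certificate_cost(obligation, 80)

print(f"Certificates to surrender: {obligation:,.0f}")
print(f"Cost at $80/certificate:   ${cost:,.0f}")
```

A retailer can reduce that purchase cost by generating eligible renewable electricity itself and earning certificates directly, which is the investment incentive the scheme is designed to create.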
Victoria’s situation
Victoria’s proposed new legislation will serve an important purpose following the retirement of the Hazelwood coal-fired power plant. Renewable energy currently represents about 17% of the state’s electricity generation, and the Andrews government is aiming to more than double this figure by 2025.
This year alone, Victoria has added an extra 685MW of renewable generation capacity, creating more than A$1.2 billion worth of investment in the process. If the new legislation succeeds in its aims, this level of investment will be sustained well into the next decade.
Under the bill’s proposals, D'Ambrosio will be required to determine by the end of this year the minimum renewables capacity needed to hit the 25% by 2020 target, and to make a similar decision by the end of 2019 regarding the 40% by 2025 target.
In mandating these milestones, the government aims to set out the exact size of the state’s transitioning energy market, in turn giving greater investment certainty to the renewable energy industry.
Read more: Closing Victoria’s Hazelwood power station is no threat to electricity supply.
Victoria’s renewable energy scheme is designed to work coherently with the federal Renewable Energy Target, which, given current usage projections, is aiming to source 23.5% of national electricity consumption from renewables by 2020.
The federal government is yet to decide on any clean energy policy beyond the end of the decade, whether that be a Finkel-recommended Clean Energy Target or something else. In the absence of confirmed federal policy, the states have assumed the responsibility of accelerating renewable energy production through legislative initiatives designed to sustain and progress market development. This is consistent with federal commitments to global climate change imperatives.
It is hoped that these initiatives will act as a stepping stone for the eventual introduction of comprehensive state and federal clean energy regulation, and the advent of some much-needed national cohesion.
Samantha Hepburn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Is Hurricane Harvey a harbinger for Houston's future?
Over the past week we have seen two major tropical storms devastate different parts of the world. First Typhoon Hato struck Hong Kong and Southern China killing at least a dozen people. And over the weekend Hurricane Harvey made landfall from the Gulf of Mexico, bringing extremely heavy rain to southern Texas and causing devastating floods in Houston.
Tropical cyclones are, of course, a natural feature of our climate. But the extreme impacts of these recent storms, especially in Houston, have understandably led to questions over whether climate change is to blame.
How are tropical cyclones changing?
Tropical cyclones, called typhoons in the Northwest Pacific and hurricanes in the North Atlantic, are major storm systems that initiate near the Equator and can hit locations in the tropics and subtropics around the world.
When we look at the Atlantic Basin we see increases in tropical storm numbers over the past century, although there is high year-to-year variability. The year 2005, when Hurricane Katrina devastated New Orleans, marks the high point.
There is a trend towards more tropical storms and hurricanes in the North Atlantic. US National Hurricane Center
We can be confident that we’re seeing more severe tropical cyclones in the North Atlantic than we did a few decades ago. It is likely that climate change has contributed to this trend, although there is low statistical confidence associated with this statement. What that means is that this observed increase in hurricane frequency is more likely than not linked with climate change, but the increase may also be linked to decadal variability.
Has Harvey been enhanced by climate change?
Unlike other types of extreme weather such as heatwaves, the influence of climate change on tropical cyclones is hard to pin down. This is because tropical cyclones form as a result of many factors coming together, including high sea surface temperatures, and weak changes in wind strength through the depth of the atmosphere.
These storms are also difficult to simulate using climate models. To study changes in tropical cyclones we need to run our models at high resolution and with interactions between the atmosphere and the ocean being represented.
It’s much easier to study heat extremes, because we can do this by looking at a single, continuous variable: temperature. Tropical cyclones, on the other hand, are not a continuous variable; they either form or they don’t. This makes them much harder to model and study.
Tropical cyclones also have many different characteristics that might change in unpredictable ways as they develop, including their track, their overall size, and their strength. Different aspects of the cyclones are likely to change in different ways, and no two cyclones are the same. Compare that with heatwaves, which often have similar spatial features.
For all these reasons, it is very hard to say exactly how climate change has affected Hurricane Harvey.
So what can we say?
While it’s hard to pin the blame for Hurricane Harvey directly on climate change, we can say this: human-caused climate change has enhanced some of the impacts of the storm.
Fortunately, in Harvey’s case, the storm surge hasn’t been too bad, unlike for Hurricanes Katrina and Sandy, for example. This is because Harvey did not travel as far, and weakened rapidly when it made landfall.
We know that storm surges due to tropical cyclones have been enhanced by climate change. This is because the background sea level has increased, making it more likely that storm surges will inundate larger unprotected coastal regions.
Building levees and sea walls can alleviate some of these impacts, although these barriers will need to be higher (and therefore more expensive) in the future to keep out the rising seas.
Deluge danger
Harvey’s biggest effect is through its intense and prolonged rainfall. A low pressure system to the north is keeping Harvey over southern Texas, resulting in greater rainfall totals.
The rainfall totals are already remarkable and are only going to get worse.
We know that climate change is enhancing extreme rainfall. As the atmosphere is getting warmer it can hold more moisture (roughly 7% more for every 1℃ rise in temperature). This means that when we get the right circumstances for very extreme rainfall to occur, climate change is likely to make these events even worse than they would have been otherwise. Without a full analysis it is hard to put exact numbers on this effect, but on a basic level, wetter skies mean more intense rain.
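A back-of-envelope sketch of that “roughly 7% per degree” rule: compounding the rate over a given amount of warming gives the approximate increase in available atmospheric moisture. The warming values below are illustrative scenarios, not attribution estimates for Harvey.

```python
def moisture_increase(warming_degrees_c, rate_per_degree=0.07):
    """Approximate fractional increase in the atmosphere's moisture-holding
    capacity, compounding the ~7%-per-degree rule of thumb."""
    return (1 + rate_per_degree) ** warming_degrees_c - 1

# Illustrative warming scenarios (degrees C above a chosen baseline).
for warming in (0.5, 1.0, 1.5, 2.0):
    print(f"{warming:.1f}°C warmer -> ~{moisture_increase(warming):.0%} more moisture")
```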
Houston, we have a problem
There are other factors that are making this storm worse than others in terms of its impact. Houston is the second-fastest growing city in the US, and the fourth most populous overall.
As the region’s population grows, more and more of southern Texas is being paved with impermeable surfaces. This means that when there is extreme rainfall the water takes longer to drain away, prolonging and intensifying the floods.
Hurricane Harvey is likely to end up being one of the most costly disasters in US history. It is also likely that climate change and population growth in the region have worsened the effects of this major storm.
Disclosure
Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.
I have always wondered: when do baby birds begin to breathe?
This is an article from I Have Always Wondered, a new series where readers send in questions they’d like an expert to answer. Send your question to alwayswondered@theconversation.edu.au
This question dates back to when I was a kid and no one has ever been able to answer it in a convincing way. I have always wondered: when do birds (and other egg-born creatures) take their first breath? And how do they take in oxygen before their lungs are working? Obviously since eggs squeak before they hatch, lungs are functional prior to the hatching… but when is that magical inflation-of-the-lungs moment? And how does it happen? – Gabrielle Deakin, Barcelona.
As placental mammals, our first breath of air comes after birth. But egg-born creatures like birds and reptiles don’t have an umbilical cord to feed them oxygen, so how do they breathe? And how can a chick inflate its lungs inside the egg?
First, let’s talk about the eggs themselves.
Eggs laid by birds have shells that are bumpy (at least under the microscope), made almost entirely of calcium carbonate, and have as many as 17,000 tiny pores. Because of these pores, oxygen can travel from the outside world to the embryo inside and carbon dioxide and water move out of the egg in the same way.
Lying between the eggshell and the albumen, or egg white, are two transparent membranes that prevent bacterial invasion, and also develop into a network of blood vessels. These membranes are the chorion and the allantois.
Membranes inside the egg move oxygen inside for the embryo, and pass carbon dioxide out. Pixabay, CC BY
Reptile eggs can be hard and almost identical to birds’ eggs, thin-shelled like parchment, or soft and leathery. Most reptile eggs are porous to air and water, and tend to absorb more water from the outside world than bird eggs. Finally, the membranes of reptiles’ eggs are very similar to birds’, but don’t always entirely surround the embryo.
Regardless of these differences, the chorion and allantois have a network of blood vessels that acts as a respiratory organ and provides the first stage of “breathing” for bird and reptile embryos.
Birds actually go through three stages of breathing in the egg. Reptiles have a similar path, but they skip straight from step one to three.
Stage 1: embryonic
Before chicks or reptiles develop lungs, they still need to get oxygen and get rid of carbon dioxide. In placental mammals like humans (and some marsupials), all of this is accomplished by the mother through the umbilical cord and the placenta.
Some reptiles have leathery shells. Brad Chambers, Author provided
In birds, this gas exchange is done by diffusion (the movement of air from the outside to the inside of the egg) through the eggshell and a complex fusion of the chorion and the allantois called the chorioallantoic membrane. Reptiles also have a chorioallantoic area which functions as a respiratory organ.
In birds, the chorioallantoic membrane develops about three days after incubation begins and takes about two weeks to develop fully. It is highly vascularised (has lots of blood vessels), which allows for the free exchange of oxygen and carbon dioxide.
This membrane also plays a central role in the development of the embryo’s bones, because it transports calcium from the eggshell to the developing chick or embryonic reptile (excluding some reptiles which get some of their calcium from the egg yolk).
Stage 2: pre-hatching
A bird hatching from the egg. Maggie J Watson, Author provided
The embryo doesn’t actually breathe via lungs for almost all of its time in the egg. When the embryo is getting close to hatching, a few differences between reptiles and birds emerge. In birds, a few days before hatching, the chick, which is now curled up tightly with its head stuck under one wing and its beak pointed towards the top of the egg, penetrates an air pocket or air cell at the top of the egg.
This air pocket began to form when the egg was laid. A freshly laid egg is the temperature of its mother’s body, but it soon begins to cool. As it cools, the inner shell membranes begin to shrink and separate from the outer shell membrane to form a pocket, which slowly fills with air and gets larger as the egg is incubated.
As soon as the chick breaks into this air pocket, it takes its first breath and the lungs begin to function. The air cell continues to be refilled with air through diffusion. Diffusion through the chorioallantoic membrane is also still used, but is slowly replaced by lung activity as hatching nears. At the very end of this period, if you put your ear to the egg, you might hear some peeping sounds.
A chick peeps in the egg. Also visible is a distinctive ‘pipping’ pattern, as the chick hammers the inside of the egg.
In birds, this sound is made through a structure called a syrinx, as birds don’t have vocal cords. But most reptile species don’t have an egg air pocket, so they go straight to stage three.
Stage 3: post-hatching
Many egg-born creatures develop a small, sharp protuberance called an “egg tooth” (technically called a caruncle) on their beak or snout. It can be made of hard skin (like in crocodiles and birds) or be an actual extra tooth (like in some lizards and snakes), but regardless, it’s used to break through the egg and falls off or is reabsorbed soon after hatching.
A baby turtle cracks through its egg. Brad Chambers, Author provided
The chick, guided by its wing placement, uses its egg tooth to hammer the inside of the egg. First the egg “stars” (when the beak begins to crack the shell), and then it “pips” (when the beak breaks through the shell). The chick uses its feet to move around in a circle and pierce the egg. The chorioallantoic membrane begins to lose function as it dries out, and the chick then relies solely on its lungs. The chick continues to peep, which tells the parent that hatching is imminent and ensures its clutch mates hatch synchronously.
Reptiles slice through their weakened eggshells (weak now because they’ve extracted most of the calcium) with an egg tooth on their snouts and start to breathe. Some reptiles (crocodiles) also produce sounds, but unlike birds they use a larynx and vocal cords, very similar to humans.
When you get right down to it, birds and reptiles do pretty much the same thing in the egg. It’s not that surprising, as birds and some reptiles are quite closely related. They’ve all evolved specific eggs to both protect growing embryos and provide them with what they need – including air.
* Email your question to alwayswondered@theconversation.edu.au
* Tell us on Twitter by tagging @ConversationEDU with the hashtag #alwayswondered, or
* Tell us on Facebook
James Van Dyke has received funding from the National Science Foundation.
Maggie J. Watson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
WA bathes in sunshine but the poorest households lack solar panels – that needs to change
Many Western Australian householders are living in “energy poverty”, according to our new Bankwest Curtin Economics Centre research report, Power to the People: WA’s Energy Future.
Although average household spending on electricity, gas and heating is no more than 4% of income, the figure rises considerably for those on lower incomes. In particular, more than a quarter of single-parent families say they spend more than 10% of their income on energy.
Single parents in particular are far more exposed to energy poverty, a trend that has grown over the past 10 years. Around one in ten of these households spends at least 15% of their income on energy costs. In some cases, this forces them to compromise on other essentials such as food and health care.
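A minimal sketch of the measure behind these figures – the share of income spent on energy – using invented household incomes and bills; only the 10% threshold comes from the report.

```python
def energy_cost_share(annual_energy_spend, annual_income):
    """Share of household income spent on electricity, gas and heating."""
    return annual_energy_spend / annual_income

# Hypothetical households: (label, annual income $, annual energy spend $).
households = [
    ("single parent, part-time work", 38_000, 4_200),
    ("dual-income couple", 120_000, 3_000),
    ("retired single person", 26_000, 2_200),
]

THRESHOLD = 0.10  # the 10%-of-income benchmark cited in the report

for label, income, spend in households:
    share = energy_cost_share(spend, income)
    flag = "energy poverty" if share > THRESHOLD else "below threshold"
    print(f"{label}: {share:.1%} of income on energy ({flag})")
```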
Read more: Five things the east coast can learn from WA about energy
Rising energy costs, as well as a personal commitment to reducing greenhouse gases, are motivating many WA households to vote with their feet (or wallets) and adopt rooftop solar photovoltaic (PV) panels at a dramatic rate.
In WA, the installed capacity of rooftop solar PV has grown by 37% in the past 18 months alone. Around 25% of suitable dwellings are now fitted with solar panels. This takes WA to third place among Australian states, behind Queensland (32%) and South Australia (31%).
If this trend continues, the state’s rooftop solar PV capacity is predicted to exceed 2,000 megawatts by 2022. That’s larger than all but one of WA’s power stations.
Generating capacity from WA rooftop solar, 2016 to 2022. Projections are based on a log-linear regression of total MW of rooftop solar PV capacity, reflecting growth both in the number of installations and in the average MW output per installation. Source: Bankwest Curtin Economics Centre/Clean Energy Regulator
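The projection method described in the chart note can be sketched as follows: fit ln(capacity) against time by least squares and extrapolate the trend. The capacity series here is invented for illustration; it is not the Clean Energy Regulator data used in the report.

```python
import math

# Hypothetical installed rooftop solar PV capacity (MW) by year; illustrative only.
years    = [2011, 2012, 2013, 2014, 2015, 2016]
capacity = [ 240,  290,  350,  430,  520,  630]

# Fit ln(capacity) = a + b * year by ordinary least squares.
n = len(years)
x_mean = sum(years) / n
log_cap = [math.log(c) for c in capacity]
y_mean = sum(log_cap) / n

b_num = sum((x - x_mean) * (y - y_mean) for x, y in zip(years, log_cap))
b_den = sum((x - x_mean) ** 2 for x in years)
b = b_num / b_den          # annual growth rate of ln(capacity)
a = y_mean - b * x_mean    # intercept

def projected_capacity(year):
    """Extrapolate the fitted log-linear trend to a future year."""
    return math.exp(a + b * year)

for year in (2020, 2022):
    print(f"Projected capacity in {year}: {projected_capacity(year):,.0f} MW")
```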
Similar trends are predicted at a national level, with consumer-bought rooftop solar PV expected to account for around 24% of electricity generation by 2040. This is set to make Australia one of the most decentralised electricity networks in the world, with 45% of its total generating capacity coming from “behind the meter”.
Haves and have-nots
Rooftop solar is a popular option, but not all households are able to take advantage of this technology. Our report reveals a clear socioeconomic gradient in household solar installations in WA.
Panels are fitted to only 7.4% of suitable homes in areas in the lowest 10% on socioeconomic indicators. That figure rises to 16% in the next-lowest 10%, and the gap widens still further as income rises. Solar installation rates are around 30% in mid-to-high socioeconomic areas.
Share of suitable WA homes with solar panels, by level of socioeconomic disadvantage. Homes deemed suitable for solar PV include detached, semi-detached or terraced houses, but not strata-titled apartments or units. Source: Bankwest Curtin Economics Centre/Clean Energy Regulator/ABS
Better incentives could boost these numbers, especially in poorer areas. The initial upfront costs deter many homeowners, while most landlords have little financial motivation to install solar on rental properties.
Read more: Poor households are locked out of green energy, unless governments help
Accessible, secure and affordable energy is essential to any well-functioning economy. And many citizens, communities and governments are acting on the imperative to move to a greener source.
Despite its huge amounts of wind and sunshine, WA lags behind other states both in committing to a clear renewable energy target and in its investment in large-scale renewable power projects.
Renewable projects under construction or at commissioning stage in 2017. Projects at the commissioning phase at the end of 2016 are not included in the total new capacity figure. Investment in South Australia’s Hornsdale Wind Farm includes stages 1, 2 and 3. Data for the ACT and NT are not available; the ACT is expected to draw most of its renewable energy from other states and territories. Source: Bankwest Curtin Economics Centre/Clean Energy Council Australia/various other sources
According to our report, WA’s total greenhouse gas emissions in 2015 were 86.5 million tonnes of carbon dioxide equivalent – fourth-ranked behind Queensland, New South Wales and Victoria. This means WA contributed 16.1% of Australia’s national emissions that year.
But while other states and territories have adopted proactive emissions-reduction policies such as state-based renewable energy targets, WA has not yet taken substantial action on this front.
Read more: The solar panel and battery revolution: how will your state measure up?
Here’s the likely game-changer: efficient, cost-effective battery storage that can deliver power at the scale required. Storage is set to become vital, both for smoothing out domestic power consumption from solar panels and for large-scale electricity generation. The Finkel Review has recommended that all future renewable energy projects be required to produce “dispatchable” power – that is, be able to store their power and release it at times of higher demand.
Greater efficiency in balancing energy demand over the course of the day, and across large-scale grid systems that feature a range of different weather conditions, is also likely to help overcome the intermittency problems associated with renewable sources.
Australia is on the cusp of an energy revolution, and the pace of change is only going to increase. WA, like every state, needs a clear roadmap to navigate the journey effectively, one that integrates existing and emerging energy technologies and maintains protections for families who cannot currently afford solar panels.
This will give greater certainty to the energy future we can all expect – and, critically, ensure that no one is left behind.
Rebecca Cassells is a Principal Research Fellow with the Bankwest Curtin Economics Centre. The Bankwest Curtin Economics Centre is an independent economic and social research organisation located within Curtin Business School at Curtin University. The Centre was established in 2012 with support from Bankwest (a division of Commonwealth Bank of Australia) and Curtin University. The views in this article are those of the authors and do not represent the views of Curtin University and/or Bankwest or any of their affiliates.
Alan Duncan is Director of the Bankwest Curtin Economics Centre. The Bankwest Curtin Economics Centre is an independent economic and social research organisation located within Curtin Business School at Curtin University. The Centre was established in 2012 with support from Bankwest (a division of Commonwealth Bank of Australia) and Curtin University. The views in this article are those of the authors and do not represent the views of Curtin University and/or Bankwest or any of their affiliates.
Yashar Tarverdi is a Research Fellow at the Bankwest Curtin Economics Centre. The Bankwest Curtin Economics Centre is an independent economic and social research organisation located within Curtin Business School at Curtin University. The Centre was established in 2012 with support from Bankwest (a division of Commonwealth Bank of Australia) and Curtin University. The views in this article are those of the authors and do not represent the views of Curtin University and/or Bankwest or any of their affiliates.
Finkel's Clean Energy Target plan 'better than nothing': economists poll
Few topics have attracted as much political attention in Australia over the past decade as emissions reduction policy.
Amid mounting concern over electricity price increases across Australia, and following blackouts in South Australia and near-misses in New South Wales, the Australian government asked Chief Scientist Alan Finkel to provide a blueprint for reform of the electricity industry, with emissions reduction policy as an underlying drumbeat.
In a new poll of the ESA Monash Forum of leading economists, a majority said that Finkel’s suggested Clean Energy Target was not necessarily a better option than previously suggested policies such as an emissions trading scheme. But many added that doing nothing would be worse still.
Read more: The Finkel Review: finally, a sensible and solid footing for the electricity sector.
The Finkel Review’s terms of reference explicitly precluded it from advising on economy-wide emissions reduction policy, and implicitly required it also to reject emission reduction policies such as an emissions tax or cap and trade scheme.
One of the Finkel Review’s major recommendations was a Clean Energy Target (CET). This is effectively an extension of the existing Renewable Energy Target to cover power generation which has a greenhouse gas emissions intensity below a defined hurdle. Such generation can sell certificates which electricity retailers (and directly connected large customers) will be required to buy.
The ESA Monash Forum panel was asked to consider whether this approach was “preferable” to an emission tax or cap and trade scheme. As usual, responses could range from strong disagreement to strong agreement with an option to neither agree nor disagree. Twenty-five members of the 53-member panel voted, and most added commentary to their response – you can see a summary of their verdicts below, and their detailed comments at the end of this article.
A headline result from the survey is that a large majority of the panel does not think the CET is preferable to a tax or cap and trade scheme. None strongly agreed that the CET was preferable, whereas 16 either disagreed or strongly disagreed, and four agreed.
Of the four who agreed, three provided commentary to their response. Stephen King preferred the CET on the grounds of its ease of implementation but otherwise would have preferred a tax or cap and trade scheme. Michael Knox agreed on the basis that the CET was preferable to the existing Renewable Energy Target. Harry Bloch unconditionally endorsed the CET.
Of the five who neither agreed nor disagreed, three commented and two of them (Paul Frijters and John Quiggin) said there was not much to distinguish a CET from a tax or cap and trade scheme. Warwick McKibbin, who disagreed with the proposition, nonetheless also suggested that the CET, tax and cap and trade scheme were comparably effective if applied only to the electricity sector.
However, closer examination of the comments suggests much greater sympathy with Finkel’s CET recommendation than the bare numbers indicate. Even among those who strongly disagreed that the CET was preferable, none suggested that proceeding with it would be worse than doing nothing. Indeed, eight (Stephen King, Harry Bloch, Alison Booth, Saul Eslake, Julie Toth, Flavio Menezes, Margaret Nowak and John Quiggin) commented that proceeding with the CET would be better than doing nothing. Interestingly, none of these eight explained why they thought doing something was better than doing nothing. Does it reflect a desire for greater investment certainty, or a conviction that reducing emissions from electricity production in Australia is important?
Seven respondents (Stephen King, Alison Booth, Saul Eslake, Julie Toth, Gigi Foster, Lin Crase and John Quiggin) alluded to the political constraints affecting the choice, and several of them drew attention to Finkel’s own observations. None of these seven suggested that the political constraint invalidated proceeding with the CET.
Of the 19 economists who provided comments on their response, 16 thought a tax or cap and trade scheme better than a CET. Opinion was evenly split (three each) on whether a tax or a cap and trade scheme was the better of the two, with the remaining ten indifferent between them.
My overall impression is that in judging Dr Finkel’s CET recommendation, most of the panel might agree with the proposition that “the perfect is the enemy of the roughly acceptable”. I surmise that a decade ago many members of the panel would have held out for greater perfection, but now they judge that further delay costs more than it gains, and that it is better to move on and make the best of the cards that have been dealt.
In emissions reduction policy the mainstream advice from Australia’s economists has not been persuasive. But this is hardly unique to Australia, as the pervasiveness of regulatory approaches in other countries shows. Perhaps an unavoidably compromised policy that is nonetheless well executed may be better than a brilliant policy that is poorly executed. Even if they could not have been more persuasive in design, Australia’s economists should still have much that is useful to contribute in execution. Hopefully more can be drawn into it.
This is an edited version of the summary of the report’s findings originally published by the ESA Monash Forum.
Bruce Mountain does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Antarctic ice reveals that fossil fuel extraction leaks more methane than thought
The fossil fuel industry is a larger contributor to atmospheric methane levels than previously thought, according to our research which shows that natural seepage of this potent greenhouse gas from oil and gas reservoirs is more modest than had been assumed.
In our research, published in Nature today, our international team studied Antarctic ice dating back to the last time the planet warmed rapidly, roughly 11,000 years ago.
Katja Riedel and Hinrich Schaefer discuss NIWA’s ice coring work at Taylor Glacier in Antarctica.
We found that natural seepage of methane from oil and gas fields is much lower than anticipated, implying that leakage caused by fossil fuel extraction has a larger role in today’s emissions of this greenhouse gas.
However, we also found that vast stores of methane in permafrost and undersea gas hydrates did not release large amounts of their contents during the rapid warming at the end of the most recent ice age, relieving fears of a catastrophic methane release in response to the current warming.
The ice is processed in a large melter before samples are shipped back to New Zealand. Hinrich Schaefer, CC BY-ND
A greenhouse gas history
Methane levels started to increase with the industrial revolution and are now 2.5 times higher than they ever were naturally. They have caused one-third of the observed increase in global average temperatures relative to pre-industrial times.
If we are to reduce methane emissions, we need to understand where it comes from. Quantifying different sources is notoriously tricky, but it is especially hard when natural and human-driven emissions happen at the same time, through similar processes.
Read more: Detecting methane leaks with infrared cameras: they’re fast, but are they effective
The most important of these cases is natural methane seepage from oil and gas fields, also known as geologic emissions, which often occurs alongside leakage from production wells and pipelines.
The total is reasonably well known, but how does it split between natural seepage and industrial leakage?
To make matters worse, human-caused climate change could destabilise permafrost or ice-like sediments called gas hydrates (or clathrates), both of which have the potential to release more methane than any human activity and reinforce climate change. This scenario has been hypothesised for past warming events (the “clathrate gun”) and for future runaway climate change (the so-called “Arctic methane bomb”). But how likely are these events?
Antarctic ice traps tiny bubbles of air, which represent samples of ancient atmospheres. Hinrich Schaefer, CC BY-ND
The time capsule
To find answers, we needed a time capsule. This is provided by tiny air bubbles enclosed in polar ice, which preserve ancient atmospheres. By using radiocarbon (14C) dating to determine the age of methane from the end of the last ice age, we can work out how much methane comes from contemporary processes, like wetland production, and how much is from previously stored methane. During the time the methane is stored in permafrost, sediments or gas fields, the 14C decays away, so these sources emit methane that is radiocarbon-free.
In the absence of strong environmental change and industrial fossil fuel production, all radiocarbon-free methane in samples from, say, 12,000 years ago will be from geologic emissions. From that baseline, we can then see if additional radiocarbon-free methane is released from permafrost or hydrates during rapid warming, which occurred around 11,500 years ago while methane levels shot up.
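As a rough illustration of why this works (a minimal sketch using the standard 14C half-life of about 5,730 years, not figures from our study), the fraction of radiocarbon remaining after long storage can be computed directly:

```python
HALF_LIFE_C14 = 5730.0  # years; standard radiocarbon half-life

def c14_fraction_remaining(storage_years: float) -> float:
    """Fraction of the original 14C left after a given storage time."""
    return 0.5 ** (storage_years / HALF_LIFE_C14)

# Methane made recently by wetlands vs. methane stored underground
for age in (0, 5_730, 50_000, 1_000_000):
    print(f"stored {age:>9,} years -> {c14_fraction_remaining(age):.6f} of 14C remains")

# After ~50,000 years less than 0.3% of the 14C is left, and gas-field
# methane (millions of years old) is effectively radiocarbon-free --
# which is why any 14C-free methane in the ancient air samples can be
# attributed to geologic seepage (plus, today, fossil fuel leakage).
```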
Tracking methane in ice
The problem is that there is not much air in an ice sample, very little methane in that air, and a tiny fraction of that methane contains a radiocarbon (14C) atom. There is no hope of doing the measurements on traditional ice cores.
Our team therefore went to Taylor Glacier, in the Dry Valleys of Antarctica. Here, topography, glacier flow and wind force ancient ice layers to the surface. This provides virtually unlimited sample material that spans the end of the last ice age.
A tonne of ice yielded only a drop of methane. Hinrich Schaefer, CC BY-ND
For a single measurement, we drilled a tonne of ice (equivalent to a cube with one-metre sides) and melted it in the field to liberate the enclosed air. From the gas-tight melter, the air was transferred to vacuum flasks and shipped to New Zealand. In the laboratory, we extracted the pure methane out of these 100-litre air samples, to obtain a volume the size of a water drop.
Only about one in a trillion methane molecules contains a 14C atom. Our collaborators in Australia were able to measure exactly how big that minute fraction is in each sample, and whether it changed during the studied period.
Low seepage, no gun, no bomb
Because radiocarbon decays at a known rate, the amount of 14C gives a radiocarbon age. In all our samples the radiocarbon date was consistent with the sample age.
Radiocarbon-free methane emissions did not increase the radiocarbon age. They must have been very low in pre-industrial times, even during a rapid warming event. The latter indicates that there was no clathrate gun or Arctic methane bomb going off.
So, while today’s conditions differ from the ice-covered world 12,000 years ago, our findings suggest that permafrost and gas hydrates are unlikely to release large amounts of methane in response to future warming. That is good news.
Wetlands must have been responsible for the increase in methane at the end of the ice age. They have a lesser capacity for emissions than the immense permafrost and clathrate stores.
Geologic emissions are likely to be lower today than in the ice age, partly because we have since drained the shallow gas fields that are most prone to natural seepage. Yet our highest estimates are only about half of the lower bound estimated for today. The total assessment (natural plus industrial) of fossil-fuel methane emissions has recently been revised upwards.
In addition, we now find that a larger part of that must come from industrial activities, raising the latter to one third of all methane sources globally. For comparison, the last IPCC report put them at 17%.
Measurements in modern air suggest that the rise in methane levels over the last years is dominated by agricultural emissions, which must therefore be mitigated. Our new research shows that the impact of fossil fuel use on the historic methane rise is larger than assumed. In order to mitigate climate change, methane emissions from oil, gas and coal production must be cut sharply.
Hinrich Schaefer works for the National Institute of Water and Atmospheric Research. He has received funding from the New Zealand Government through Strategic Science Investment Funds and a Marsden Grant. In previous positions, his work has received government funding from Germany, the European Union, Canada and the USA, as well as a grant from the American Chemical Society.
Capturing the true wealth of Australia’s waste
One of the byproducts of landfill is “landfill gas”, a mixture of mostly methane and carbon dioxide generated by decomposing organic material. Methane is a particularly potent greenhouse gas, but it can be captured from landfill and used to generate clean electricity.
Methane capture is a valuable source of power but, more importantly, it can significantly reduce Australia’s methane emissions. However the opportunity to produce energy from waste is largely being squandered, as up to 80% of the potential methane in waste is not used.
If more councils were prepared to invest in better facilities, Australians would benefit from less waste in landfill and more energy in our grids. Even the by-product of state-of-the-art processing methods can be used as a bio-fertiliser.
Read more: Explainer: how much landfill does Australia have?
And while these facilities are initially more expensive, Australians are generally very willing to recycle, compost and take advantage of community schemes to reduce waste. It’s reasonable to assume that a considerable percentage of our population would support updating landfill plants to reduce methane emissions.
Recycling in Australia
Australia may have a bad rap when it comes to waste recycling, but there are plenty of positives.
Australians produce approximately 600 kilograms of domestic waste per person, per year – no more than most northern European countries, which set the benchmark in sustainable waste management.
Looking at kerbside bins, we recycle on average 30-35% of that waste, saving much of our paper, glass, aluminium and steel from landfill (which also saves energy and reduces emissions).
Although the household recycling rate in Australia is less than the best-performing EU recycling rates of 40-45%, this is primarily due to a lack of access to (or awareness of) schemes to recycle e-waste and metals. Data therefore suggests that at the community level, there is a willingness to minimise and recycle waste.
Read more:
Australia is still miles behind in recycling electronic products
Campaigns urging us to ‘care more’ about food waste miss the point
Between 55% and 60% of kerbside waste sent to landfill in Australia is organic material. Over 65% of this organic fraction is food waste, similar to the make-up of the EU organic waste stream, which comprises 68% food waste.
Despite this large fraction, approximately half of the household organic waste we produce – mostly garden waste – is separately collected and disposed of, again demonstrating high community participation in recycling when collection and disposal options are available.
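Putting those figures together gives a sense of the scale. The sketch below uses mid-range values from the numbers quoted above and, as a simplification, treats all domestic waste as kerbside waste, so it is an order-of-magnitude illustration rather than an official estimate:

```python
# Rough back-of-envelope estimate using mid-range values from the article.
domestic_waste_kg = 600             # per person per year
recycled_share = 0.325              # 30-35% recycled at the kerb
organic_share_of_landfill = 0.575   # 55-60% of landfilled waste is organic
food_share_of_organic = 0.65        # over 65% of the organic fraction is food

to_landfill = domestic_waste_kg * (1 - recycled_share)
organic_to_landfill = to_landfill * organic_share_of_landfill
food_to_landfill = organic_to_landfill * food_share_of_organic

print(f"Sent to landfill:         {to_landfill:.0f} kg/person/year")
print(f"Organic waste landfilled: {organic_to_landfill:.0f} kg/person/year")
print(f"Food waste landfilled:    {food_to_landfill:.0f} kg/person/year")
# On these assumptions roughly 150 kg of food waste per person ends up
# in landfill each year -- the feedstock discussed in the next section.
```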
Turning waste into energy
Energy recovery from waste is the conversion of non-recyclable material into useable heat, electricity or fuel. Solid, non-organic waste is usually converted to energy after being heated, but organic waste like kitchen and garden refuse has too much moisture to be treated this way.
Read more: Explainer: why we should be turning waste into fuel
Instead, when organic waste is sent to landfill it is broken down naturally by microorganisms. This process releases methane, a greenhouse gas 25 times more potent than carbon dioxide, and represents the loss of a valuable energy resource.
Around 130 landfills in Australia are capturing methane and using it to generate electricity. Based on installed power generation capacity and the amount of waste received, Australia’s largest landfills use 20-30% of the potential methane in waste for electricity generation.
Ravenhall in Melbourne processes 1.4 million tonnes of waste per year, and proposes to generate 8.8 megawatts (MW) of electricity by 2020. Roughly 461,000 tonnes of waste goes to Woodlawn in NSW, and in 2011 it generated 4MW of electrical power. Swanbank in Queensland receives 500,000 tonnes a year and generates 1.1MW.
The remainder of the methane is flared due to poor gas quality or insufficient transmission infrastructure, is oxidised as it migrates towards the surface of the landfill, or simply escapes. The methane generating capacity of waste is also diminished because organics begin composting as soon as they reach landfill.
But there are more efficient ways to capture methane using specialised anaerobic digestion tanks. The process is simple: an anaerobic (oxygen free) tank is filled with organic waste, which is broken down by bacteria to produce biogas. This is similar to the natural process that occurs in landfill, but is much more controlled and efficient in a tank.
Read more: Biogas: smells like a solution to our energy and waste problems
The biogas can be combusted to produce electricity and heat, or can be converted to pure biomethane to be used either in the mains gas grid, or as a renewable transport fuel. In contrast to landfills, 60-80% of the methane potential of waste is used to generate electricity in anaerobic digesters, with most of the remainder used to power waste handling and the digestion process.
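To see what that difference means in practice, here is a back-of-envelope comparison. The methane yield of 100 cubic metres per tonne of food waste is an assumed round number for illustration, and the generator efficiency is a typical value rather than a figure from any specific facility; only the 20-30% and 60-80% utilisation ranges come from the discussion above:

```python
# Illustrative comparison of landfill gas capture vs anaerobic digestion.
METHANE_POTENTIAL_M3_PER_T = 100   # assumed m3 of CH4 per tonne of food waste
ENERGY_MJ_PER_M3_CH4 = 36          # approximate lower heating value of methane
ELECTRICAL_EFFICIENCY = 0.35       # typical gas-engine generator efficiency

def electricity_kwh_per_tonne(utilisation: float) -> float:
    """Electricity recovered per tonne of waste at a given methane utilisation."""
    energy_mj = METHANE_POTENTIAL_M3_PER_T * utilisation * ENERGY_MJ_PER_M3_CH4
    return energy_mj * ELECTRICAL_EFFICIENCY / 3.6   # 3.6 MJ per kWh

landfill = electricity_kwh_per_tonne(0.25)   # 20-30% utilised in large landfills
digester = electricity_kwh_per_tonne(0.70)   # 60-80% utilised in digesters

print(f"Landfill gas capture: ~{landfill:.0f} kWh per tonne of food waste")
print(f"Anaerobic digestion:  ~{digester:.0f} kWh per tonne of food waste")
# On these assumptions a digester recovers roughly three times as much
# electricity per tonne, simply because far less of the methane is wasted.
```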
The nutrient-rich sludge that remains after anaerobic digestion, called digestate, is also a valuable biofertiliser. It can support food production, and further reduce greenhouse gases by decreasing our reliance on energy-intensive manufactured fertilisers.
The use of food waste as a feedstock for anaerobic digestion is largely untapped in Australia but has huge potential. A site in Sydney’s geographic centre (Earth Power Technologies) and Richgro’s Jandakot facility near Perth are part of a handful that are converting food waste to energy using this technology.
The future of organic recycling
Local council recycling and waste infrastructure is typically not a priority election issue, except for residents close to existing or proposed landfills.
Read more: Australian recycling plants have no incentive to improve
Ratepayers are generally not informed of the possibility of separately collecting food waste, either for industrial-scale composting or methane capture. We have the right to this information, with costs and benefits presented in the context of the costs we already pay for waste management, and relative to the environmental performance of landfill.
As an example, landfill operators often promote the number of homes they power by electricity generated from methane as a key measure of sustainability. But how does this compare to the electricity and heat that might be obtained from an anaerobic digester that processes the same waste?
Given the choice, the Australian community may have an appetite to extend organic recycling beyond well-established garden waste composting. They only have to be asked.
William Clarke has received funding from the Australian Research Council, the Queensland Government and Remondis Australia. He is a member of the Managing Board of the International Waste Working Group.
Bernadette McCabe is a member of Bioenergy Australia and is National Team Leader for the International Energy Agency Task 37 Energy from Biogas.
Curious Kids: Where does my poo go when I flush the toilet? Does it go into the ocean?
This is an article from Curious Kids, a series for children. The Conversation is asking kids to send in questions they’d like an expert to answer. All questions are welcome – serious, weird or wacky!
Where does my poo go when I flush the toilet? Does it go into the ocean? – Clancy, age 4, Austinmer, NSW.
When you press the flush button, your wee, poo, toilet paper and water go down a pipe called a sewer. The toilet flushes the wastes down the sewer pipe. The sewer pipe from your house also collects and removes other wastes. This might be soapy water from baths and showers, or water left over from washing dishes and clothes. Together, all of these wastes are called “sewage”. The pipes they travel through are called “sewerage pipes”. People sometimes get “sewage” and “sewerage” mixed up.
The wastes from your house flow downhill. They join those from other homes and flow into bigger sewer pipes. Some of these pipes are bigger than a bus! If you live in a big city, the wastes from thousands of people look like a river of sewage.
The big sewer pipes take all the sewage to a place where it is treated. This place is called a sewage treatment plant. All towns and cities have these. They are like a big factory where any harmful materials are removed. This is a very important part of our city life.
This video shows how a sewage treatment plant in England works.
Flushing is fun, but there are some things you should never flush down the toilet – like baby wipes. Flickr/GoonSquadSarah, CC BY
Sewage contains lots of germs and if people come into contact with it, it can make them very sick. The treatment also removes things that people have flushed down the toilet. This includes things like toys, jewellery or even money. There are some things you should never flush down the toilet, like baby wipes – even if it says “flushable” on the packet – because they clump up and cause big problems for the sewerage system.
The sewage is cleaned in the treatment plant. This can take many days. It makes sure that harmful parts of the sewage are removed. Chemicals are added to kill as many germs as possible. Then the treated water is released into a local river or even the ocean. If you live near the coast your treated sewage probably goes into the ocean.
This is a bottle of recycled water from Singapore. It was made from treated sewage and is safe to drink. Flickr/Tristan Schmurr, CC BY
The treated sewage is cleaned to make sure that it does not cause environmental problems. This means that it should not harm the plants and fish that live in the river or ocean where it is released. If the sewage is not fully treated it can cause water pollution. It also should not make people sick if they swim in the river or ocean. Scientists test the water and the sewage wastes to make sure they are safe.
Some treated sewage can be used to make energy or recycled to make water that can be used in factories or farms. Some countries, including parts of Australia, can even make water from treated sewage that is safe enough to drink. Singapore makes “recycled” drinking water out of treated sewage that is even purer than the level that the World Health Organisation (which is a group that makes a lot of suggestions about what’s healthy and what’s not) says is safe to drink.
Hello, curious kids! Have you got a question you’d like an expert to answer? Ask an adult to send your question to us. You can:
* Email your question to curiouskids@theconversation.edu.au
* Tell us on Twitter by tagging @ConversationEDU with the hashtag #curiouskids, or
* Tell us on Facebook
Please tell us your name, age and which city you live in. You can send an audio recording of your question too, if you want. Send as many questions as you like! We won’t be able to answer every question but we will do our best.
Ian Wright does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
What blackout? How solar-reliant power grids passed the eclipse test
The total solar eclipse that captivated the United States this week was more than just a celestial spectacle (and a reminder to take care of your eyes). It was also a valuable lesson in how to manage electricity grids when a crucial generation source – solar power, in this case – goes temporarily offline.
The last total solar eclipse to pass over the US was in 1979, a year when President Jimmy Carter was in the midst of the energy crisis and struggling with ballooning oil prices. In response, he made a concerted shift to greater energy independence through alternative energy sources such as solar.
In 2017, almost the whole world is grappling with the transformation of the electricity industry and the move to renewable energy.
Read more: Scientist at work: why this meteorologist is eager for an eclipse.
Eclipses have – and always will have – a lot to teach us. While this eclipse did not cause major disruption to the US electricity network, it gave system operators a better understanding of how future intermittencies can be managed.
The path of the eclipse, shown relative to the positions of major US solar power installations. US Energy Information Administration
Despite the rapid decline and rebound in solar power output during the event, operators were able to manage without a hitch. Their thankless task reminds us of the importance of having resilient and robust electricity systems with sufficient backup capacity.
Solar plants lost around half of their ability to generate electricity during the two and a half hours of the eclipse, dipping and rising almost three times faster than the average rate at which power stations can ramp their output up and down. The shortfall was covered largely by gas-fired power plants, and extra hydro capacity.
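The arithmetic behind that ramping challenge is straightforward. In the sketch below the fleet size and timing are hypothetical round numbers chosen for illustration; only the broad shape of the event – roughly half of solar output lost and then regained over about two and a half hours – follows the description above:

```python
# Minimal sketch of the ramping arithmetic grid operators face during an eclipse.
# Capacity and duration are hypothetical round numbers for illustration only.

def avg_ramp_mw_per_min(capacity_mw: float, lost_fraction: float,
                        one_way_minutes: float) -> float:
    """Average rate at which other generation must ramp to cover the dip."""
    return capacity_mw * lost_fraction / one_way_minutes

solar_capacity_mw = 10_000      # hypothetical solar fleet size
lost_fraction = 0.5             # roughly half of output lost at maximum obscuration
one_way_minutes = 75            # half of a ~2.5 hour eclipse window

ramp = avg_ramp_mw_per_min(solar_capacity_mw, lost_fraction, one_way_minutes)
print(f"Backup generation must ramp at roughly {ramp:.0f} MW per minute,")
print("first upwards as the moon covers the sun, then back down as it passes.")
```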
California faced a particularly tough challenge because of its relatively high level of renewable energy; last year 10% of the state’s electricity came from solar photovoltaic (PV) power.
California’s solar output during the eclipse. California ISO
Given the recent scrutiny of Australia’s beleaguered electricity grid, it makes sense to ask how our power system would fare if faced with the same challenge. Take a walk through almost any suburb and you’ll see dozens of solar panels glinting from roofs. How much have they destabilised our grid? Would we pass the eclipse test?
System managers and market operators such as the Australian Energy Market Operator already balance demand and supply intricately throughout the day. They must deal with unexpected outages at power stations and extreme weather events (think of South Australia), and increasingly must predict how the growing share of intermittent renewable generation will be matched and secured.
According to the Clean Energy Council, Australian renewables provided 17% of the country’s electricity generation in 2016. In world terms that looks rather unimpressive. But this figure does not reflect the growing impact of behind-the-meter solar PV that is slowly but surely reducing reliance on grid electricity during the day.
As outlined in a previous FactCheck, Australia has the highest proportion of households with PV systems on their roof of any country in the world, at over 15%. (However our total energy produced from solar is somewhat less than Germany, Italy, Belgium and Japan, which have a propensity for larger systems).
Of course, all this distributed solar adds to the complexity for utilities and grid operators, and underpins why we have technical rules and connection standards to ensure that households connecting individual systems to the grid do not cause unintended consequences for local network areas. As the forecasts for rooftop solar installations continue to be revised upwards, AEMO nevertheless remains sanguine about the potential for grid disruption:
…it is technically feasible to integrate this amount of rooftop PV into the network over the forecast horizon, through a mix of market, network, and non-network (such as storage) solutions to address issues such as increasing variability in system demand, low daytime demand, and increased ramping at morning and afternoon electricity system peaks.
Utilities themselves are acutely aware of the “non-negotiable social contract of keeping the lights on”, as mused by Frank Tudor, chief executive of Western Australia’s regional utility Horizon Power, in an opinion piece written before the eclipse. The emboldened South Australian government may take further comfort in the fact that its newly minted 150-megawatt Aurora Solar Energy Project would come into its own during such weather interruptions (more often due to clouds than eclipses), with its capacity to store solar power in molten salt storage tanks, to be dispatched as required during peak periods.
Lean and green machines
The eclipse also underlines how crucial the innovations in technology and data analytics will be in ensuring that electricity grids can still operate seamlessly as the share of renewable energy grows.
We are already seeing this in many small, isolated power networks across the country, where microgrids, particularly in coastal tourist towns with a proclivity for clean technology, are pushing the limits of hosting capacity and driving utilities to explore big data solutions to help integrate increased levels of solar PV.
One such example is the sky camera trial being conducted in Carnarvon, Western Australia, that will track weather patterns and anticipate cloud cover to help with grid stability. The trial is using machine learning to help predict the impact of weather on the grid, and to balance the fluctuations with other energy sources, thus helping the network to withstand such events without losing reliability.
Read more: Five things the east coast can learn from WA about energy.
With our energy systems becoming ever more distributed and decentralised, the US eclipse provides another of nature’s lessons on the need to be smart about creating resilient networks.
The next total solar eclipse for Australia will be in 2028, and will pass straight over Sydney. In the meantime, a hybrid eclipse will cross Australia’s northwest in April 2023.
Time will tell how much of an impact these events will have on our power grids. But given the importance of electricity for our health, wealth, transport and so much more, let’s hope our system operators and policy makers aren’t blindsided.
Dev Tayal also works as a strategist for Horizon Power.
Sea the possibilities: to fight climate change, put seaweed in the mix
The next stage of humanity’s fight to reduce greenhouse emissions may revolve around seaweed, according to tonight’s episode of ABC’s Catalyst, presented by Professor Tim Flannery, which asks the question “can seaweed save the world?”
With the help of me and colleagues around the world, the documentary explores seaweed’s enormous potential to reduce greenhouse gases and draw CO₂ out of the atmosphere. In the case of seaweed, that could include giant kelp farms that de-acidify oceans, or feeding algae to cattle and sheep to dramatically reduce their methane emissions.
Read more: How farming giant seaweed can feed fish and fix the climate
But while these possibilities are exciting, early adopters are dealing with unproven technology and complex international treaties. Globally, emissions are likely to keep rising, which means seaweed-related carbon capture should only be one part of a bigger emissions reduction picture.
Net negative emissions
To stay within the Paris climate agreement’s 2℃ warming threshold, most experts agree that we must remove carbon from the atmosphere as well as reduce emissions. Many scientists now argue that 2℃ will still cause dangerous climate change, and an upper limit of 1.5℃ warming by 2100 is much safer.
To achieve that goal, humanity must begin reducing global emissions from 2020 (in less time than it takes an undergrad enrolling now to finish their degree) and rapidly decarbonise to zero net emissions by 2050.
Read more: We need to get rid of carbon in the atmosphere, not just reduce emissions
Zero net carbon emissions can come from radical emissions reductions, and massive geoengineering projects. But it could be vastly helped by what Flannery calls “the third way”: mimicking or strengthening Earth’s own methods of carbon capture.
Studies support the need to remove carbon from the atmosphere, but there are serious technical, economic and political issues with many large-scale plans.
On the other hand, seaweed solutions could be put to work in the biologically desert-like “doldrums” of the ocean, and have positive side effects such as helping to clear up the giant ocean rubbish patches. However, there are many technical problems still to be solved to make this a reality.
We probably haven’t reached peak emissions
Removing carbon from the atmosphere is an attractive proposition, but we can’t ignore the emissions we’re currently pumping out. For any negative emissions technology to work, our global emissions from fossil fuels must start to drop significantly, and very soon.
But wait a second, haven’t we already hit peak emissions? It’s true that for the third year in a row, global carbon dioxide emissions from fossil fuels and industry have barely grown, while the global economy has continued to grow strongly.
This is great news, but the slowdown in emissions growth has been driven primarily by China, along with the United States and a general decline in emissions in developed countries.
China’s reductions are impressive. Its coal consumption peaked in 2014, and the country tends to under-promise and over-deliver on emissions reductions. However, under the Paris agreement China has committed to a 60-65% reduction in emissions intensity, which means there is still room for its absolute emissions to rise in the future.
India’s emissions, on the other hand, are a major wild card. With a population of 1.3 billion and rising, about 300 million of whom are still not connected to an electricity grid, and potential increases in coal use to provide energy, India will be vital to stabilising greenhouse gases.
Read more: To slow climate change, India joins the renewable energy revolution
India’s emissions today match those of China in 1990. A study that combined India’s Paris agreement targets with OECD estimates about its long-term economic growth, suggested India’s CO₂ emissions could still grow significantly by 2030 (although per capita emissions would still be well below China and the US).
The emissions reduction relay race
So how do we deal with many competing and interconnected issues? Ideally, we need an array of solutions, with complementary waves of technology handling different problems.
Clearly the first wave, the clean energy transition, is well under way. Solar installations are breaking records, with an extra 75 gigawatts added to our global capacity in 2016, up from 51 gigawatts installed in 2015. But this still represents just 1.8% of total global electricity demand.
In addition to renewable energy generation, limiting warming to below 1.5°C also means we must increase the efficiency of our existing grid. Fortunately, early-stage financiers and entrepreneurs are focusing on a second wave of smart energy, which includes efficiency and optimisation technologies. Others in Australia have also noted the opportunities offered by the increasing use of small, smart devices connected to the internet that respond to user demand.
Although early user results have been mixed, research shows better system control reduces the emissions intensity of energy generation. These energy efficient devices and optimisation software are on the cusp of becoming widely commercially available.
Critically, these efficiency technologies will be needed to complement structural change in the fossil fuel energy mix. This is especially true in places where emissions are set to grow significantly, like India. Building renewable energy capacity, optimising it with new software and technologies, and better understanding the opportunity for net negative emissions all play an important part in the emissions reduction relay race over the next 50 years to get us to 1.5°C.
With further research, development, and commercialisation, the possibilities offered by seaweed – outlined in more detail in the Catalyst documentary – are potentially game-changing.
But, as we saw with the development of renewable energy generation technology, it takes a long time to move from a good idea to wide implementation. We must support the scientists and entrepreneurs exploring zero-carbon innovations – and see if seaweed really can save the world.
Can Seaweed Save the World? airs on the ABC on Tuesday 22 August at 8.30pm.
Adam Bumpus receives funding from the University of Melbourne Faculty of Science, the Australian Research Council (DECRA fellowship), and received financial compensation from the ABC for time working on the documentary.
Greening the concrete jungle: how to make environmentally friendly cement
Cement is the world’s most widely used material apart from water, largely because it is the key ingredient in concrete, the world’s favourite building material.
But with cement’s success comes a huge amount of greenhouse emissions. For every tonne of cement produced in Australia, 0.82 tonnes of CO₂ is released. That might not sound like much, especially when compared with the 1.8 tonnes emitted in making a tonne of steel. But with a global production of more than 4 billion tonnes a year, cement accounts for 5% of the world’s industrial and energy greenhouse emissions.
Read more: The problem with reinforced concrete.
The electricity and heat demands of cement production are responsible for around 50% of the CO₂ emissions. But the other 50% comes from the process of “calcination” – a crucial step in cement manufacture in which limestone (calcium carbonate) is heated to transform it into quicklime (calcium oxide), giving off CO₂ in the process.
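The stoichiometry of calcination shows why that second half is so hard to avoid. The sketch below uses standard molar masses (these figures are basic chemistry, not numbers from the BZE report discussed next):

```python
# Calcination: CaCO3 -> CaO + CO2. Molar masses are standard chemistry;
# the per-tonne figures below follow directly from them.

M_CACO3 = 100.09   # g/mol, calcium carbonate (limestone)
M_CAO = 56.08      # g/mol, calcium oxide (quicklime)
M_CO2 = 44.01      # g/mol, carbon dioxide

co2_per_tonne_limestone = M_CO2 / M_CACO3   # tonnes CO2 per tonne of CaCO3 calcined
co2_per_tonne_quicklime = M_CO2 / M_CAO     # tonnes CO2 per tonne of CaO produced

print(f"{co2_per_tonne_limestone:.2f} t CO2 released per tonne of limestone calcined")
print(f"{co2_per_tonne_quicklime:.2f} t CO2 released per tonne of quicklime produced")
# About 0.44 t and 0.79 t respectively -- emissions that come from the
# chemical reaction itself, before any fuel is burned to heat the kiln,
# which is why changing the recipe matters so much.
```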
A report published by Beyond Zero Emissions (BZE) (on which I was a consultant) outlines several ways in which the sector can improve this situation, and perhaps even one day create a zero-carbon cement industry.
Better recipes
The cement industry has already begun to reduce its footprint by improving equipment and reducing energy use. But energy efficiency can only get us so far because the chemical process itself emits so much CO₂. Not many cement firms are prepared to cut their production to reduce emissions, so they will have to embrace less carbon-intensive recipes instead.
The BZE report calculates that 50% of the conventional concrete used in construction could be replaced with another kind, called geopolymer concrete. This contains cement made from products such as fly ash, slag or clay, rather than limestone.
Making this transition would be relatively easy in Australia, which has more than 400 million tonnes of fly ash readily available as stockpiled waste from the coal industry – already about 20 years’ worth of supply.
Read more: Eco-cement, the cheapest carbon sequestration on the planet.
These types of concrete are readily available in Australia, although they are not widely used because they have not been included in supply chains, and large construction firms have not yet put their faith in them.
Another option, more widely known to construction firms, is to use the so-called “high blend” cements containing a mixture of slag, fly ash and other compounds blended with cement. These blends have been used in concrete structures all over the world, such as the BAPS Shri Swaminarayan Mandir Hindu temple in Chicago, the foundation slab of which contains 65% fly ash cement. These blends are available everywhere in Australia, but their usage is not as high as it should be, due to a lack of trust from the industry.
Built on the fly (ash): a Hindu temple in Chicago. BAPS.org/Wikimedia Commons, CC BY-SA
It is even theoretically possible to create “carbon-negative cement”, made with magnesium oxide in place of traditional quicklime. This compound can absorb CO₂ from the air when water is added to the cement powder, and its developer Novacem, a spinoff from Imperial College London, claimed a tonne of its cement had a “negative footprint” of 0.6 tonnes of CO₂. But almost a decade later, carbon-negative cement has not caught on.
Capturing carbon
The CO₂ released during cement fabrication could also potentially be recaptured in a process called mineral carbonation, which works on a similar principle to the carbon capture and storage often discussed in relation to coal-fired electricity generation.
This technique can theoretically prevent 90% of cement kiln emissions from escaping to the atmosphere. The necessary rocks (olivine or serpentine) are found in Australia, especially in the New England area of New South Wales, and the technique has been demonstrated in the laboratory. It has not yet been deployed at commercial scale, although several companies around the world are currently working on it.
Read more: The ‘clean coal’ row shouldn’t distract us from using carbon capture for other industries.
Yet another approach would be to adapt the design of our buildings, bridges and other structures so they use less concrete. Besides using the high-performance concretes, we could also replace some of the concrete with other, less emissions-intensive materials such as timber.
Previously, high greenhouse emissions were locked into the cement industry because of the way it is made. But the industry now has a range of tools in hand to start reducing its greenhouse footprint. With the world having agreed in Paris to try and limit global warming to no more than 2℃, every sector of industry needs to do its part.
Rackel San Nicolas is affiliated with the University of Melbourne, the International Union of Laboratories and Experts in Construction Materials, Systems and Structures, and the Australian French Association of Science and Technology. She receives funding from the Australian Research Council.
Dry winter primes Sydney Basin for early start of bushfire season
It might feel like the depths of winter, but Australian fire services are preparing for an early start to the bushfire season. Sydney has been covered with smoke from hazard reduction burns, and the New South Wales Rural Fire Service has forecast a “horrific” season.
Predicting the severity of a bushfire season isn’t easy, and – much like the near-annual announcements of the “worst flu season on record” – repeated warnings can diminish their urgency.
However, new modelling that combines Bureau of Meteorology data with NASA satellite imaging has found that record-setting July warmth and low rainfall have created conditions very similar to 2013, when highly destructive bushfires burned across NSW and Victoria.
Crucially, this research has found we’re approaching a dryness threshold past which fires are historically far more dangerous.
Read more: Climate change to blame for Australia’s July heat
How to measure bushfire fuel
On September 10, 2013, several bushfires in Sydney’s west caused havoc well before the official start of the bushfire season. These were a precursor to fires that destroyed more than 200 properties a month later. Warm, dry winter weather had dried out the fuels in Sydney’s forests and bush reserves beyond “normal” levels for the time of year.
The timing and severity of those preseason fires were a reminder that the region’s forests are flammable all year round; they can burn whenever the fuel they contain dries out past a certain threshold.
In most forests, there is an abundance of fuel in the form of leaf litter, dead twigs, branches and logs, lower vegetation such as shrubs and grasses, as well as higher foliage and branches.
The flammability of all these different kinds of fuel depends largely on their moisture content. Leaf litter and fine dead branches on the soil surface can dry out in a matter of days, whereas logs may take weeks or months to lose their moisture. The moisture content of shrubs and tree canopies varies depending on the amount of water in the soil, so they reflect the overall rainfall and temperatures across a whole season.
The flammability of an entire forest is therefore a complex calculation of all these different kinds of fuel (both alive and dead) and their different moisture levels.
Mapping Sydney’s forests
In a recent collaborative study, we combined data from a Bureau of Meteorology project that maps water availability levels across Australia with satellite imagery to develop new tools for mapping and monitoring moisture levels of different fuels in forests and woodlands.
We checked these tools by modelling fuel moisture levels during fires in NSW, Victoria and the ACT between 2000 and 2014, and comparing our predictions to historical bushfires.
Our research has identified critical dryness thresholds associated with significant increases in fire area. Rather than flammability increasing gradually as forests dry out, we found a jump: when dead fuel moisture drops below 15%, subsequent bushfires are markedly larger. Another jump occurs when dead fuel moisture falls below 10%. We found similar thresholds in growing plants, although their moisture content is measured differently.
These dryness thresholds are pivotal, because they may represent the breakdown of moist natural barriers in landscapes that prevent fires from spreading. Understanding these mechanisms makes it possible to predict fire risk much more accurately.
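To illustrate how such thresholds might be applied in monitoring (this is a toy sketch, not the model used in our study; the 15% and 10% dead fuel figures come from the findings above, while the live fuel cutoff shown is a placeholder):

```python
# Toy classifier illustrating the threshold idea described above.
# The 15% and 10% dead fuel moisture thresholds come from the study;
# the 100% live fuel moisture cutoff is a placeholder, since live fuel
# moisture is measured on a different scale.

def fire_potential(dead_fuel_moisture_pct: float,
                   live_fuel_moisture_pct: float) -> str:
    """Very rough fire-potential category from fuel moisture levels."""
    if dead_fuel_moisture_pct < 10:
        return "extreme: second dryness threshold crossed"
    if dead_fuel_moisture_pct < 15 or live_fuel_moisture_pct < 100:
        return "elevated: a dryness threshold crossed"
    return "lower: moist fuels still act as natural barriers"

# Example readings (hypothetical):
print(fire_potential(dead_fuel_moisture_pct=14, live_fuel_moisture_pct=120))
print(fire_potential(dead_fuel_moisture_pct=9, live_fuel_moisture_pct=90))
```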
As part of this project we compared the fuel moisture in Sydney Basin’s forested areas in 2013 and 2017. As shown in the chart below, currently the live fuel moisture level is tracking well below the 2013 values, and is approaching a crucial threshold (indicated by the dotted line).
The moisture content of dead fuel has been more variable, but it has also dipped below the 2013 curve and, if warm dry weather continues, could reach critical levels before the end of August.
The median predicted dead fuel moisture content and live fuel moisture content in forest areas of the Sydney Basin Bioregion in 2013 and 2017. Black dashed horizontal lines indicate fuel moisture threshold values. The start dates of major fires in 2013 are indicated by orange vertical lines. Author provided
In another worrying sign, mapping shows critically dry live fuel is much more abundant in 2017 than it was in 2013.
Remotely sensed live fuel moisture content in forest areas of the Sydney Basin Bioregion in July 2013 (left) and July 2017 (right). Author provided
It’s clear that much of the Sydney Basin is dangerously primed for major bushfires, at least until it receives major rainfall. Forecasts for windy but largely dry weather in coming weeks may exacerbate this problem.
These new insights into landscape-scale fuel dryness provide a powerful indicator of what might be expected. They also build our capacity for week by week monitoring of fire potential.
Preparation by both fire management authorities and exposed homeowners is now an immediate priority, to cope with the strong likelihood of an early and severe fire season.
Matthias Boer receives funding from the Australian Research Council and the Bushfire & Natural Hazards Cooperative Research Centre.
Rachael Helene Nolan has received funding from the Australian Research Council.
Ross Bradstock receives funding from the ARC, the Bushfire and Natural Hazards CRC, and the NSW and Victorian governments.
Where to take refuge in your home during a bushfire
When you live in a bushfire-prone area you can’t ignore the danger. Most individuals and families address this necessity by preparing a bushfire survival plan. The best way to survive a bushfire is not to be there when it arrives.
For most Australian fire agencies the “leave early” policy has largely replaced the previous “stay and defend or leave early” one. This reflects an emphasis on preserving human life during a bushfire event – an emphasis that has strengthened since the 2009 Black Saturday bushfires.
Read more: How to prepare your home for a bushfire – and when to leave
Even when planning to leave early, unexpected events can occur. Not being able to find a child or family pet may delay departure until it’s no longer safe to travel. Taking refuge in your home then becomes a last resort, a worst-case scenario. But this contingency is worth considering as part of your bushfire survival plan.
If you do need to take refuge inside your home during a bushfire, which parts are likely to be the safest? As part of my PhD research, I asked 252 residents living in bushfire-prone areas which parts of their houses they would shelter in during a bushfire, which parts they would avoid, and why. I then analysed the features of these locations against the known places where people died in their home during bushfires in Australia from 1901 to 2011.
Determining the safer places to shelter is further complicated because not all houses are the same. There are many different types, with large variations in design, construction materials, location and surrounding vegetation. It is therefore not possible to give absolute answers on where people should take shelter in their homes during a bushfire, but some general guidelines can be given.
Where are the safer spaces to shelter?
Upstairs is generally a more dangerous space to seek shelter during a bushfire. Upstairs levels are more difficult to escape from. They often have large windows and sliding glass doors designed to capture views, which can crack and implode under radiant heat and strong winds. Upper levels are also often constructed of lightweight materials that are more flammable and vulnerable to direct flame contact from burning trees.
The ground floor is generally a safer space to shelter. The ground level usually has more external doors from which the occupant can escape. On a sloping block, however, the easiest level from which to exit may be the first floor. The ground level often has smaller windows (except those leading to entertainment areas). From the ground floor it is easier to get to the driveway and closer to an external water source such as a water tank.
People often suggest the bathroom as a good place to shelter during a bushfire. However, the bathroom can also be dangerous. During a bushfire, mains water is often cut or the pressure is reduced to a trickle. Despite having tiled walls, non-combustible fittings and a water supply, bathrooms like other rooms are vulnerable to the collapse of a burning ceiling when embers have ignited in the roof cavity.
Most bathrooms do not have an external door that residents can use to exit the house. In a bathroom it can be difficult to see the progress of a fire. And as bathrooms are small enclosed spaces they may be more vulnerable to carbon monoxide poisoning.
Read more: Low flammability plants could help our homes survive bushfires
My advice is to look at all the external ground floor doors (while remembering that glass doors can be dangerous because of their vulnerability to radiant heat), and determine which of them provide access to adjoining outside paved, gravel, concrete or other non-combustible areas. You should also see if there is a small window from which you can observe the progress of the bushfire, and if there is a sink close by to store water. Where possible consider installing a fire alarm that has a carbon monoxide sensor with audible and visual alerts.
When you have identified the most suitable place in the house to actively shelter during a bushfire, follow the bushfire preparation activities provided by fire authorities. Some of these will include looking out of a window to follow the progress of the fire and being aware of current bushfire updates on the radio and via mobile phone. There is no such thing as passive sheltering.
Being inside your home as the fire passes offers more protection than being outside. But it should be seen as a last resort, with leaving early the preferred action. Fire agencies work hard to inform residents of days when bushfires are likely, and to provide updates on fires that do break out. Residents in bushfire-prone areas should take these warnings and updates seriously and leave their properties when advised to do so, especially when catastrophic fires are expected.
The advice given in this article is general and may not suit every circumstance.
Douglas Brown is the Principal of Bushfire Architecture, a research consultancy that provides design advice to building professionals with clients in bushfire-prone areas. He also works as a casual academic at Western Sydney University. He was the recipient of a PhD scholarship from the Bushfire CRC.
Noise from offshore oil and gas surveys can affect whales up to 3km away
Air guns used for marine oil and gas exploration are loud enough to affect humpback whales up to 3km away, potentially affecting their migration patterns, according to our new research.
Whales’ communication depends on loud sounds, which can travel very efficiently over distances of tens of kilometres in the underwater environment. But our study, published today in the Journal of Experimental Biology, shows that they are affected by other loud ocean noises produced by humans.
As part of the BRAHSS (Behavioural Response of Humpback whales to Seismic Surveys) project, we and our colleagues measured humpback whales’ behavioural responses to air guns like those used in seismic surveys carried out by the offshore mining industry.
Read more: It’s time to speak up about noise pollution in the oceans
Air guns are devices towed behind seismic survey ships that rapidly release compressed air into the ocean, producing a loud bang. The sound travels through the water and into the sea bed, bouncing off various layers of rock, oil or gas. The faint echoes are picked up by sensors towed by the same vessel.
During surveys, the air guns are fired every 10-15 seconds to develop a detailed geological picture of the ocean floor in the area. Although they are not intended to harm whales, there has been concern for many years about the potential impacts of these loud, frequent sounds.
Sound research
Although it sounds like a simple experiment to expose whales to air guns and see what they do, it is logistically difficult. For one thing, the whales may respond to the presence of the ship towing the air guns, rather than the air guns themselves. Another problem is that humpback whales tend to show a lot of natural behavioural variability, making it difficult to tease out the effect of the air gun and ship.
There is also the question of whether any response by the whales is influenced more by the loudness of the air gun, or how close the air blast is to the whale (although obviously the two are linked). Previous studies have assumed that the response is driven primarily by loudness, but we also looked at the effect of proximity.
We used a small air gun and a cluster of guns, towed behind a vessel through the migratory path of more than 120 groups of humpback whales off Queensland’s Sunshine Coast. By having two different sources, one louder than the other, we were able to fire air blasts of different perceived loudness from the same distance.
We found that whales slowed their migratory speed and deviated around the vessel and the air guns. This response was influenced by a combination of received level and proximity; both were necessary. The whales were affected up to 3km away, at sound levels over 140 decibels, and deviated from their path by about 500 metres. Within this “zone”, whales were more likely to avoid the air guns.
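A simplified calculation shows why received level and proximity are so hard to untangle. The sketch below assumes simple spherical spreading and an illustrative source level of 210 dB, neither of which comes from our study; only the 140-decibel response level is taken from the findings above:

```python
import math

# Simplified sketch: spherical spreading (transmission loss = 20*log10(range))
# ignores real-world bathymetry and surface effects, and the 210 dB source
# level is an assumed illustrative value, not a figure from the study.

def received_level_db(source_level_db: float, range_m: float) -> float:
    """Received level at a given range under spherical spreading."""
    return source_level_db - 20 * math.log10(range_m)

def range_for_level_m(source_level_db: float, received_db: float) -> float:
    """Range at which the received level falls to a given value."""
    return 10 ** ((source_level_db - received_db) / 20)

SOURCE_LEVEL_DB = 210   # assumed, dB re 1 uPa at 1 m
THRESHOLD_DB = 140      # received level at which responses were observed

print(f"Received level at 3 km: {received_level_db(SOURCE_LEVEL_DB, 3000):.0f} dB")
print(f"Range to the {THRESHOLD_DB} dB contour: "
      f"{range_for_level_m(SOURCE_LEVEL_DB, THRESHOLD_DB) / 1000:.1f} km")
# Under these assumptions the 140 dB contour sits roughly 3 km from the
# air gun -- consistent with the response zone described above, and a
# reminder that loudness and proximity cannot be fully separated.
```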
Each tested group moved as one, but our analysis did not include the effects on different group types, such as a female with calf versus a group of adults, for instance.
Our results suggest that when regulating to reduce the impact of loud noise on whale behaviour, we need to take into account not just how loud the noise is, but how far away it is. More research is needed to find out how drastically the whales’ migration routes change as a result of ocean mining noise.
Rebecca Dunlop receives funding from the Joint Industry Programme on E&P Sound and Marine Life (JIP), managed by the International Association of Oil & Gas Producers (IOGP), and from the US Bureau of Ocean Energy Management.
Michael Noad receives funding from the Joint Industry Programme on E&P Sound and Marine Life (JIP), managed by the International Association of Oil & Gas Producers (IOGP), and from the US Bureau of Ocean Energy Management.
Of renewables, Robocops and risky business
A while ago I asked what types of people will lead our great energy transition.
Well, some of them seem to be living in North Melbourne. Earlier this month I watched as Victoria’s Climate Change Minister Lily D’Ambrosio announced A$1 million for a hydrogen refuelling station to power zero-emission local government vehicles. The money, from the New Energy Jobs Fund, will sit alongside A$1.5 million that Moreland Council is investing over three years.
The Council hopes to turn rainwater harvested from its buildings into hydrogen fuel, using power from its solar panels and wind turbines, and then to use that fuel to run its fleet of garbage trucks. If (and it is an if) everything works, then residents get less air and noise pollution, and the council gets a smaller energy bill and carbon footprint. You can read my account of the launch here.
Of course, there are doubters. One commenter under my report wrote:
It amazes me how anybody could still think [hydrogen fuel cells] are a step in the right direction for domestic land transportation. Their inherent lack of efficiency compared to batteries, difficulty with storage, explosion risk and the cost of building the support infrastructure has been demonstrated innumerable times.
Yet Japan is planning for 800,000 hydrogen-fuelled vehicles by 2030. Are all of these governments really backing the wrong horse?
This is the nub of the problem: technological outcomes generally become clear after the fact, and rarely before. After a “dominant design” has survived the battles, hindsight, via historians, tells us it was obvious all along which type of gizmo was going to win.
Scholars have long pointed out that this is a fallacy – starting with the humble bicycle. The truth is that technological innovation is not the clean predictable process that pristine white lab coats and gleaming laboratories would have us think.
The history of technology is littered with the carcasses of superior ideas that were killed by inferior marketing (Betamax tapes, anyone?). Meanwhile there are the success stories that only happened through serendipity – such as Viagra, text messages, and Post-it notes. Sometimes technologies simply don’t catch the public eye, and their proponents withdraw them and repurpose them (hello Google Glass).
Even the most successful technologies have teething problems. Testing prototypes is not for the faint-hearted (as anyone who’s seen Robocop will vividly remember).
If there’s no clear and obvious technological route to follow, then an industry can end up “perseverating” – repeating the same thing insistently and redundantly. As these two studies show, the American car industry couldn’t decide what should replace the internal combustion engine, and so hedged its bets by flitting between various flavours of the month, from biofuels to LPG to hybrids and everything in between.
Risky business
This is what makes Moreland Council’s choices so interesting. It might make “more sense” to wait and see, to let someone else run all the risks, and then be a fast follower, with the advantages and disadvantages that entails. But of course if everyone does that, then nothing ever gets done.
Meanwhile, if civil society is pushing for change, a council’s own political makeup shifts (the Greens did well in the last local elections), and there are determined officers, then an experiment can be conducted. Coincidentally enough, Moreland Council’s chief executive Nerina Di Lorenzo recently completed a PhD on local governments’ attitudes to risk. Within a year or three she’ll no doubt have enough material for a post-doc.
Meanwhile, South Australian Premier Jay Weatherill seems to have lost all hope that the black hole-sized vacuum in federal energy and climate policy will ever be fixed. He has famously commissioned the world’s biggest lithium battery and, now, a long-awaited concentrated solar thermal power plant in Port Augusta.
Learning process
What we are seeing in Moreland is a local council and its state government acting together (what academics snappily call “multilevel governance”), while further west we have another state government that has resolved to push its chips onto the green baize and spin the roulette wheel.
Will these experiments work? Will the right lessons be learned, from either failure or success (or more likely, living as we do in the real world, a mixture of both)? How can the “successful” technologies (however that is defined) be scaled up at tremendous speed, so we somehow clamber up the learning curve faster than we slither up the Keeling Curve of atmospheric carbon dioxide levels?
Can it be done? We need industrial quantities of luck, and optimism. And seriously – what do we have to lose by trying, other than the love of some vested interests?