The Conversation
The government's new gas deal will ease the squeeze, but dodges the price issue
The deal signed this week by the federal government and the nation’s three biggest gas producers will ease Australia’s gas supply squeeze, but it will do nothing to address the current high prices.
Under the contract, Shell, Origin and Santos have agreed to supply more domestic gas to avert the predicted shortfall for 2018.
In so doing, the government seemingly sidestepped the need to trigger its own powers to forcibly restrict gas exports.
Sighs of relief all round, then. But here’s the thing: neither the new deal, nor the legislation that governs export controls, actually addresses the issue that is arguably most important to consumers – the high prices Australians are paying for their gas.
Read more: To avoid crisis, the gas market needs a steady steer, not an emergency swerve
Australia has vast gas resources, and yet somehow we find ourselves with rising prices and a forecast shortfall of up to one-sixth of demand in the east coast gas market in 2018.
This is partly understandable, given that rising global demand has fuelled a lucrative export market. The primary destination is Asia, which is expected to account for more than 70% of global demand. In geographical terms this puts Australian exporters in a very strong position, and by 2019 Australia is forecast to supply 20% of the global market – up from 9% today.
However, the strong global demand for liquefied natural gas (LNG) does not in itself fully explain the rising gas prices in Australia’s east coast gas market. The other key factor is a weak regulatory environment.
Policy levers
The Australian Domestic Gas Security Mechanism, which took effect in July 2017, gives the federal resources minister the power to restrict exports of LNG in the event of a forecast shortfall for the domestic market in any given year.
This five-year provision was designed as a short-term measure to ensure domestic gas supply. If triggered, it would require LNG exporters either to limit their exports or to find new sources of gas to offset the impact on the domestic market.
To trigger the mechanism, the minister must follow three steps:
formally declare that the forthcoming year has a domestic shortfall, by October 1 of the preceding year;
consult relevant market bodies, government agencies, industry bodies and other stakeholders to determine their view on the existing and forecast market conditions; and
make a determination by November 1 on whether to implement the measures.
Any export restriction implemented under the ADGSM would potentially apply to all LNG exports nationwide, including those from areas with no forecast gas shortage, such as Western Australia. The minister does have the ability to determine the type of export restriction that is imposed. An unlimited volume restriction does not impose a specific volumetric limitation and can be applied to LNG projects that are not connected to the market experiencing the shortfall. A limited volume restriction imposes specific limits on the amount of LNG that may be exported and may be applied to an LNG project that is connected to the market experiencing the shortfall.
Non-compliance with the export limits imposed on gas projects would have a range of potential consequences for gas companies. These include revocation of export licence, imposition of different conditions, or stricter transparency requirements.
The new deal
The agreement signed with the big three gas producers effectively relieves the government of the need to consider triggering the ADGSM. As such, 2018 has not been officially declared to be a domestic shortfall year.
But the agreement is not grounded upon any specific legislative provision. It is therefore essentially enforceable only against the gas companies that are parties to it, and only in accordance with the private terms and conditions those companies have agreed to.
The broad agreement is that contractors will sell a minimum of 54 petajoules of gas into the east coast domestic market (the lower limit of the forecast shortfall) and keep more on standby in case the eventual shortfall turns out to be bigger.
But what about prices?
The deal contains no specific provision regarding domestic pricing. So, although there will be more gas in the domestic market, this does not necessarily mean that the current high prices will drop.
In the short term, the provision of additional supply may curtail dramatic increases in domestic gas prices. However, the gas deal does not address the core problem, which stems from our enormous commitment to LNG exports and the connection of domestic gas prices to the global energy market.
Indeed, the commitments are so great that many LNG operators have had to take conventional gas from South Australia and Victoria to fulfil their export contracts. This has put significant pressure on domestic prices.
The unequivocal truth is that gas prices were much cheaper before the LNG export boom. The only way to achieve some level of protection for domestic gas prices is to implement stronger regulatory controls on the export market. This should involve taking account of the public interest when assessing whether export restrictions should be imposed.
The ADGSM legislation does not incorporate any explicit public interest test, despite the fact that gas is a public resource in Australia and gas pricing is a strong public interest issue.
Compare that with the United States, where public interest is a key principle in assessing whether to approve any LNG exports to countries with no US free trade agreement (such as Japan). Public interest tests in the United States involve a careful determination of how exports will affect domestic supply and the potential impact that a strong export market will have upon domestic prices.
Read more: Want to boost the domestic gas industry? Put a price on carbon
The Australian government’s decision to broker a deal with gas suppliers, rather than extend the long arm of the law, means that regulators will need to keep a close eye on the gas companies to check that they are holding up their end of the bargain.
That job will fall to the Australian Competition and Consumer Commission (ACCC). ACCC chair Rod Sims this week warned gas suppliers to ensure that their “retail margins are appropriate”.
In the absence of any explicit rules compelling gas producers that signed the deal to provide clear and accurate information and adopt stronger transparency protocols, the ACCC may face a very onerous task.
Samantha Hepburn does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Europe will benefit hugely from keeping global warming to 1.5°C
From heatwaves to intense rainfall and severe cold weather, Europe experiences its fair share of weather extremes.
In an open access study, published in Environmental Research Letters, David Karoly and I have found that without limiting global warming, Europe is likely to see even more severe heat, less frequent extreme cold, and more intense rain events.
The Paris Agreement of December 2015 aims to limit the global temperature increase to “well below 2℃ above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5℃”, so as to “significantly reduce the risks and impacts of climate change”.
Read more: What is a pre-industrial climate and why does it matter?
Our analysis compares temperature and rainfall extremes under the 1.5℃ and 2℃ levels of global warming, with these same events in the current climate (with global warming of just over 1℃) and a pre-industrial climate.
Hotter, and more frequent, heat extremes
As the world warms up, so does Europe, although more in the Mediterranean and the east and less over Scandinavia and the British Isles.
We studied changes in a few different heat events, including hot summers like the record of 2003 in Central Europe. A blocking high pressure pattern led to persistent sunny hot weather across much of the continent, which dried out the region and enhanced the heat. Temperature records tumbled across the continent, with new national records for daily maximum temperatures in France, the UK and other countries. Previous work has already found a clear human fingerprint in both the event itself and the excess deaths associated with the heat.
Our study projects hot summers like 2003 will become more frequent at 1.5℃ and 2℃ of global warming. At 2℃ of global warming, Central European hot summers like 2003 would very likely occur in most years.
Hot European summers like 2003 become more frequent at higher levels of global warming. Bars show best estimates of the chance of an event per year, with the black lines showing 90% confidence intervals. Author provided
We also find an increasing likelihood of events like the recent record hot year in Europe in 2016 and the record hot year in Central England in 2014 under the Paris Agreement’s targeted levels of global warming.
… But fewer, and less intense, cold extremes
The December of 2010 was exceptionally cold across the British Isles, as a lack of weather systems crossing the Atlantic allowed air from the north and the east to frequently cross the region. There was a new cold temperature record for Northern Ireland and persistent cold weather across the UK and Ireland, with long runs of sub-zero days. Heavy snowfall caused widespread disruption for days at a time.
A snowy scene at Worcester Cathedral in December 2010. David King
Our analysis finds that such a cold December was already very unlikely to occur in the current climate, and would be extremely unlikely under either 1.5℃ or 2℃ of global warming. Future cold weather events would still be associated with similar weather patterns, but the background warming in the climate system would make them less intense than in the world of today or under pre-industrial conditions.
When it rains, it pours
We also studied extreme rain events, in particular the heavy rain that led to large-scale flooding in England and Wales in May, June and July of 2007. Low pressure systems passed over the British Isles almost continually for that three-month period, so the rain was falling on already saturated ground. On July 19 and 20 more than 100mm of rain fell on a broad swathe of the English Midlands. This record-breaking rainfall resulted in some of the worst floods in British history.
The River Teme near Worcester, England in flood in July 2007. David King
Extended rainy periods like May-July 2007 are very rare, and not projected to become more frequent at 1.5℃ or 2℃ of global warming.
However, extreme rainfall days like we saw during that period are projected to become both more frequent and more intense in a warmer world. In a 2℃ world we would expect very heavy rain days to be at least 70% more frequent than in the current climate over the UK and Ireland.
Clear benefits to keeping a lid on global warming
Many of the most costly extreme weather events in Europe, in particular extreme heat and intense rainfall events, are projected to become more common, even at the relatively low levels of global warming that are being targeted under the Paris Agreement.
More frequent heat extremes expected as the globe warms up. Best estimates of the likelihood of extreme events are shown (with 90% confidence intervals in parentheses). T means average temperature and R means total rainfall. TXx and TNn mean the hottest daily maximum and coldest daily minimum, respectively, while Rx1day means the wettest single day. Author provided
The worst impacts of these events can be avoided through improving the planning and responses for such events, whether it is increasing support for the elderly in France during summer heatwaves or improving flood protection on major rivers in Britain.
However, limiting global warming to 1.5℃, rather than 2℃ or more, would reduce the frequency with which these extreme event responses would need to be implemented.
Put simply, to prevent a more extreme future for Europe’s weather, we need to keep the lid on global warming.
Disclosure
Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.
The oil and gas sector needs to diversify if it wants to prosper
One does not have to look far to see signs that the oil and gas industry has a bumpy road ahead. Demand might stay high for decades, but given the dizzying pace of technological change, who would bet on that?
Take the recent pledges by India, France, Britain, and China to phase out petrol and diesel vehicles. Or the plummeting costs of grid-scale solar power, rapidly becoming cheaper than fossil-fuelled electricity.
These developments should cause oil and gas companies to think very carefully about their next move. Big investments in natural gas globally, made on the assumption that gas is a bridge to a clean energy future, may fall flat because renewables are developing so swiftly.
The fact of the matter is that oil and gas companies need to start planning for a low-carbon future and embrace the opportunities it presents. One approach is to diversify their products and embrace renewable energy – one of four strategies identified in CSIRO’s industry-led Oil and Gas Roadmap, which outlines some of the future directions the industry might take.
Read more: Big oil’s offshore scramble is risky business all round
With 40% of companies involved in the exploration and production of petroleum likely to move away from oil and gas in 2017, solar photovoltaics and energy storage offer alternative avenues in which oil and gas companies can invest.
Renewables can be integrated into operations to reduce both the cost and the carbon intensity of operations. In the longer term, these technologies could help energy companies to develop more sophisticated offerings. For instance, hybrid solar and gas microgrids could be sold to developing nations, allowing them to leapfrog from energy poverty into clean, cheap distributed energy for all, effectively skipping expensive, centralised electricity grid infrastructure.
The Oil and Gas roadmap.
Gas-powered ships
Two more strategic opportunities focus on expanding the potential of the least carbon-intensive fossil fuel: natural gas.
For example, global demand for liquefied natural gas (LNG) for transport is expected to grow fourfold to 100 million tonnes a year by 2030, a prime target being maritime shipping. Meeting this LNG demand could open up a valuable market for Australia.
Another opportunity lies in the creation of higher-value products. Natural gas can be converted to many refined products that can fetch higher margins in the market, including diesel and other chemicals such as methanol and dimethyl ether.
More investment is needed to make conversion technology economically competitive, but it would be a wise investment, especially in light of Australia’s lack of domestic strategic fuel reserves.
Read more: Running on empty: Australia’s risky approach to oil supplies
Hydrogen fuel is another possibility for Australian resource companies. It can be produced from gas, but in the future hydrogen fuel could also be manufactured by solar-powered electrolysis of water. Both would be good options, given Australia’s abundance of gas and sunlight.
Investments will be needed to improve the production and transport economics of hydrogen, including the development of efficient technologies that can convert hydrogen carriers (like ammonia) to hydrogen at the point of use.
Smarter fuel options. CSIRO, Author provided
Our roadmap also suggests other ways for companies to get involved in the energy transition, by becoming more efficient, less wasteful, and more productive.
Advanced environmental solutions point to ways to improve water quality and reuse, to reduce or eliminate greenhouse gas emissions (including sequestering carbon dioxide, controlling fugitive emissions, and finding alternatives to flaring), and to find the best ways to decommission assets like wells and offshore platforms after their useful life is over.
The industry needs to be much more efficient in exploring and producing oil and gas so that the life of existing assets can be lengthened, often using less environmentally damaging approaches such as waterless fracturing and reservoir rejuvenation using microbes. Robots and artificial intelligence could also help to improve efficiency and safety.
The oil and gas sector has an important role to play in the future of the energy sector, but that role is changing. Companies need to be proactive to remain relevant. If they pursue some of the opportunities outlined here, they will help ensure they stay viable into the future.
Jerad A. Ford has previously received research funding and scholarships from the UQ Centre for Coal Seam Gas while a student and post-doc researcher at the University of Queensland Business School.
Australia's $1 billion loan to Adani is ripe for a High Court challenge
Indian mining giant Adani’s proposal to build Australia’s largest coal mine in Queensland’s Galilee Basin has been the source of sharp national controversy, because of its potential economic, health, environmental and cultural risks.
These concerns were amplified this week when India’s former environment minister Jairam Ramesh told the ABC’s Four Corners:
My message to the Australian government would certainly be: please demonstrate that you have done more homework than has been the case so far.
It’s a valid warning, considering that a Commonwealth investment board is considering loaning Adani A$1 billion in federal money to assist the development of mining infrastructure.
Read more: Adani gives itself the green light, but that doesn’t change the economics of coal
The loan, expected to be announced any day now, will no doubt agitate further political controversy.
It is also likely to pave the way for yet more court challenges against Adani’s proposal.
Why does Adani want Commonwealth money?
One of the major questions about Adani’s mine is its financial viability, and its inability to secure private sector funding. Its proponents blame anti-coal campaigners, but arguably more important are the myriad concerns about Adani’s liquidity, its corporate structure and conduct, and the ever-weakening international coal market.
Against this backdrop Adani has requested A$1 billion from the Northern Australia Infrastructure Facility (NAIF), a A$5 billion discretionary government fund set up in 2015 to promote economic development in the country’s north.
The timing and geographical focus of the fund have raised fears it is just a government “slush fund”, set up with Adani’s plans specifically in mind. The federal government initially denied this, with Energy Minister Josh Frydenberg stressing that the mine “needs to stand on its own two feet”.
But shortly after the NAIF Act was passed, Adani’s application was made public, although there remains little available detail about whether or why it will be given the money, or the exact amount.
Loan procedures
NAIF’s board will make the decision, not a government minister. Its processes are shielded from scrutiny by a lack of transparency requirements and consistent blocking of Freedom of Information requests.
As the loan decisions are made by a quasi-corporate board, rather than a minister, it is much harder (if not impossible) to challenge them directly in court. Nor does the NAIF Act provide grounds for review or appeal.
Ultimately, this leaves those who object to Adani receiving Commonwealth money with a very limited avenue of legal challenge. The only option is to argue that the NAIF Act is itself unconstitutional.
Constitutional challenge
The Commonwealth has no direct power to make laws that control or support infrastructure or mining. Instead, the NAIF Act seeks to do this indirectly using Section 96 of the Constitution, which states:
During a period of ten years after the establishment of the Commonwealth and thereafter until the Parliament otherwise provides, the Parliament may grant financial assistance to any State on such terms and conditions as the Parliament thinks fit. (emphasis added)
There are two points to note here.
The first is that this granting provision was clearly meant as a transitional measure for the decade immediately following federation, to protect poorer states from bankruptcy while adjusting their economies to a federal model. Note also that the provision was clearly intended to help state governments, not corporations.
The second is the phrase “terms and conditions”, which clearly relates to the repayment of these loans, much like the terms and conditions applied to any banking loan today.
Both of these things were ignored by the early (and somewhat infamous) Engineers High Court from the 1920s to 1950s, which tended to interpret the Constitution in a way that favoured the Commonwealth over the states.
Perhaps most importantly, the court ruled that Section 96 allows the Commonwealth to apply any terms and conditions it likes to the loans, rather than simply setting the terms of repayment. That has meant that states can be compelled to take particular actions – such as accepting national educational standards, building roads or, indeed, infrastructure development – in return for financial assistance. States were also forced to stop collecting income tax in return for federal monies. This resulted in a “vertical fiscal imbalance” which has left the states at the financial mercy of the Commonwealth ever since.
This extremely liberal interpretation of Section 96 has not been legally challenged since the early days of the federation, not least because recipients or potential recipients of money are unlikely to bite the hand that feeds them. But the Adani loan might just change this.
Critics of the use of Section 96 have long hoped for a High Court challenge to its ever-growing use to expand Commonwealth financial influence. The Adani loan may be the right vehicle. Thennicke/Wikicommons
Adani’s prospective loan seems clearly inconsistent with the wording of Section 96. Any constitutional challenge against it is likely to be complex and nuanced, but two basic arguments present themselves.
First, the Constitution states that it is the Commonwealth Parliament that must determine both the loan and its conditions. However, the NAIF Act grants these powers to a corporate board, which answers only indirectly to the Parliament.
Second, the Constitution states that it is the state that must receive the loan. But the Queensland Government has stated that it will simply pass the NAIF funding straight to Adani, and that:
Commonwealth’s borrowings for the NAIF project will remain on the Commonwealth’s balance sheet and not on Queensland’s.
This is a highly questionable use of a federal power that was conceived as a way to help states with their financing, rather than private multinational companies.
Note also the apparent bypassing of the Senate in this process. Senators may be likely to bring a legal challenge if they feel that federal money meant to benefit their states is being distributed improperly.
More than just federal money at stake
While it is impossible to second-guess the High Court on such a complex matter, its recent decisions indicate a major swing away from unsupervised Commonwealth spending, especially on issues that affect the fiscal balance between the states and Commonwealth. The potential Adani loan certainly seems to fall into that category.
Read more: Why are we still pursuing the Adani Carmichael mine?
Yet as much as Section 96 has been stretched beyond its original intention, it has also been used to support vital and important national enterprises, from education, to social welfare, and indeed national development projects.
With that in mind, the Commonwealth might ultimately come to doubt the wisdom of granting such a vast sum of money to a questionable company. If it leads to a more restrictive reading of Section 96 by the High Court, it might significantly limit Canberra’s ability to fund valuable schemes in other areas.
Brendan Gogarty has provided pro bono (free) legal advice to the Australian Conservation Foundation on the constitutionality of the proposed Adani loan. The advice was provided in a voluntary capacity in his role as a community legal practitioner.
Why are we still pursuing the Adani Carmichael mine?
Why, if Adani’s gigantic Carmichael coal project is so on-the-nose for the banks and so environmentally destructive, are the federal and Queensland governments so avid in their support of it?
Once again the absurdity of building the world’s biggest new thermal coal mine was put in stark relief on Monday evening via an ABC Four Corners investigation, Digging into Adani.
Read more: Adani gives itself the green light, but that doesn’t change the economics of coal
Where the ABC broke new ground was in exposing the sheer breadth of corruption by this Indian energy conglomerate. And its power too. The TV crew was detained and questioned in an Indian hotel for five hours by police.
It has long been the subject of high controversy that the Australian government, via the Northern Australia Infrastructure Facility (NAIF), is still contemplating a A$1 billion subsidy for Adani’s rail line – a proposal to freight the coal from the Galilee Basin to Adani’s port at Abbot Point on the Great Barrier Reef.
But more alarming still, and Four Corners touched on this, is that the federal government is also considering using taxpayer money to finance the mine itself, not just the railway.
No investors in sight
As private banks have walked away from the project, the only way Carmichael can get finance is with the government providing guarantees to a private banking syndicate, effectively putting taxpayers on the hook for billions of dollars in project finance.
The prospect is met with the same incredulity in India as it is here in Australia:
FOUR CORNERS: “Watching on from Delhi, India’s former Environment Minister can’t believe what he is seeing.”
JAIRAM RAMESH: “Ultimately, it’s the sovereign decision of the Australian Government, the federal government and the state government.”
FOUR CORNERS: “But public money is involved, and more than public money, natural resources are involved.”
JAIRAM RAMESH: “I’m very, very surprised that the Australian government, uh, for whatever reason, uh, has uh, seen it fit, uh, to all along handhold Mr Adani.”
Here we have a project that does not stack up financially, and whose profits - should it make any - are destined for tax haven entities controlled privately by Adani family interests. Yet the Queensland government has shocked local farmers and environmentalists by gifting Adani extremely generous water rights, and royalties concessions to boot.
Why are Australian governments still in support?
The most plausible explanation is simply politics and political donations. There is no real-time disclosure of donations and it is relatively easy to disguise them, as there is no disclosure of the financial accounts of state and federal political parties either. Payments can be routed through opaque foundations, the various state organisations, and other vehicles.
Many Adani observers believe there must be money involved, so strident is the support for so unfeasible a project. The rich track record of Adani bribing officials in India, as detailed by Four Corners, certainly points that way. But there is little evidence of it.
In the absence of proof of any significant financial incentives, however, the most compelling explanation is that neither of the major parties is prepared to be “wedged” on jobs, or accused of being anti-business or anti-Queensland.
There are votes in Queensland’s north at stake. Furthermore, the fingerprints of Adani’s lobbyists are everywhere.
Adani lobbyist and Bill Shorten’s former chief of staff Cameron Milner helped run the re-election campaign of Premier Annastacia Palaszczuk. This support, according to The Australian, has been given free of charge:
Mr Milner is volunteering with the ALP while keeping his day job as director and registered lobbyist at Next Level Strategic Services, which counts among its clients Indian miner Adani…
The former ALP state secretary held meetings in April and May with Ms Palaszczuk and her chief of staff David Barbagallo to negotiate a government royalties deal for Adani, after a cabinet factional revolt threatened the state’s largest mining project.
Adani therefore enjoys support and influence on both sides of politics. As the same report noted: “Next Level Strategic Services co-director David Moore — an LNP stalwart who was Mr Newman’s chief of staff during his successful 2012 election campaign — is also expected to volunteer with the LNP campaign.”
So it is that Premier Palaszczuk persists with discredited claims that Carmichael will produce 10,000 jobs when Adani itself conceded in a court case two years ago the real jobs number would be but a fraction of that.
If the economics don’t stack up, why is Adani still pursuing the project?
The Adani group totes an enormous debt load, the seaborne thermal coal market is in structural decline as new solar capacity is now cheaper to build than new coal-fired power plants, and the government of India is committed to phasing out coal imports in the next three years.
Why flood the market with 60 million tonnes a year in new supply and further depress the price of one of this country’s key export commodities?
The answer to this question lies in the byzantine structure of the Adani companies themselves. Adani already owns the terminal at Abbot Point and it needs throughput to make it financially viable.
Both the financial structures behind the port and the proposed railway are ultimately controlled in tax havens: the Cayman Islands, the British Virgin Islands and Singapore. Even if Adani Mining and its related Indian entities upstream, Adani Enterprises and Adani Power, lose money on Carmichael, the Adani family would still benefit.
Read more: Australia’s $1 billion loan to Adani is ripe for High Court challenge
The port and rail facilities merely “clip the ticket” on the volume of coal which goes through them. The Adani family then still profits from the privately-controlled infrastructure, via tax havens, while shareholders on the Indian share market shoulder the likely losses from the project.
As the man who used to be India’s most powerful energy bureaucrat, E.A.S. Sharma, told the ABC: “My assessment is that by the time the Adani coal leaves the Australian coast the cost of it will be roughly about A$90 per tonne.
"We cannot afford that, it is so expensive.”
More questions than answers remain
This renders the whole project even more bizarre. Why would the government put Australian taxpayers on the hook for a project likely to lose billions of dollars, when the only clear beneficiaries are the family of Indian billionaire Gautam Adani and his Caribbean tax havens?
My view is that this project is a white elephant and will not proceed. Given the commitment by our elected leaders however, it may be that some huge holes in the earth may still be dug before it falls apart.
Michael West does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Australian household electricity prices may be 25% higher than official reports
The International Energy Agency (IEA) may be underestimating Australian household energy bills by 25% because of a lack of accurate data from the federal government.
The Paris-based IEA produces official quarterly energy statistics for the 30 member nations of the Organisation for Economic Cooperation and Development (OECD), on which policymakers and researchers rely heavily. But to provide this service, the IEA relies on member countries to provide it with good-quality data.
Last month, the agency published its annual summary report, Key World Statistics, which reported that Australian households have the 11th most expensive electricity prices in the OECD.
Read more: FactCheck: Are Australians paying twice as much for electricity as Americans?
But other studies – notably the Thwaites report into Victorian energy prices – have reported that households are typically paying significantly more than the official estimates. In fact, if South Australia were a country it would have the highest energy prices in the OECD, and typical households in New South Wales, Queensland or Victoria would be in the top five.
A spokesperson for the federal Department of Environment and Energy, the agency responsible for providing electricity price data to the IEA, told The Conversation:
Household electricity prices data for Australia are sourced from the Australian Energy Market Commission annual Residential electricity price trends report. The national average price is used, with GST added. It is a weighted average based on the number of household connections in each jurisdiction.
The Australian energy statistics are the basis for the Australia data reported by the IEA in their Key world energy statistics. The Department of the Environment and Energy submits the data to the IEA each September. Some adjustments are made to the AES data to conform with IEA reporting requirements.
But it is clear that the electricity price data for Australia published by the IEA is at least occasionally of poor quality.
The Australian household electricity series in the IEA’s authoritative Energy Prices and Taxes quarterly statistical report stopped in 2004, and only resumed in 2012.
Between 2012 and 2016, the IEA’s reported residential price series data for Australia showed no change in prices.
Yet the Australian Bureau of Statistics’ electricity price index, which is based on customer surveys, showed a roughly 20% increase in the All Australia electricity price index over this period.
Australia is also the only OECD nation not to report electricity prices paid by industry.
Current prices
This year’s reported household average electricity prices are almost certainly wrong too. The IEA reports that household electricity prices in Australia for the first quarter of 2017 were US20.2c per kWh.
At a market exchange rate of US79c to the Australian dollar, this puts Australian household electricity prices at AU28c per kWh. Adjusted for the purchasing power of each currency, the comparable price is AU29c per kWh.
By contrast, the independent review of the Victorian energy sector chaired by John Thwaites surveyed the real energy prices paid by customers, as evidenced by their bills. In a sample of 686 Victorian households, those with energy consumption close to the median value were paying an average of AU35c per kWh in the first quarter of 2017. This is 25% more than the IEA’s official estimate. At least part of this difference is explained by the AEMC’s assumption that all customers in a competitive retail market are supplied on their retailers’ cheapest offers. But this is not the case in reality.
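For readers who want to check the size of that gap, here is a minimal sketch of the arithmetic (in Python, using only the two figures quoted above; the variable names are purely illustrative):

```python
# Back-of-envelope comparison of the IEA's reported household price with the
# price surveyed by the Thwaites review (both in Australian cents per kWh,
# first quarter of 2017, as quoted above).
iea_price = 28.0       # IEA figure, converted to Australian cents per kWh
surveyed_price = 35.0  # median Victorian household, Thwaites review survey

gap_percent = (surveyed_price / iea_price - 1) * 100
print(f"Surveyed price exceeds the IEA estimate by about {gap_percent:.0f}%")  # ~25%
```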
Surveying real electricity and gas bills drastically reduces the range of assumptions that need to be made to estimate the price paid by a representative customer. Indeed, as long as the sample of bills is representative of the population, a survey based on actual bills produces a reliable estimate of representative prices in retail markets characterised by high levels of price dispersion, as Australia’s retail electricity markets are.
Read more: Baffled by baseload? Dumbfounded by dispatchables? Here’s a glossary of the energy debate
Pointing to a reliable estimate of Victoria’s representative residential price is, of course, not enough to prove that the IEA’s estimate is wrong. It could just as easily mean that Victorians are paying way more than the national average for their electricity.
But the idea that Victorians are paying more than average does not stack up when we look at the state-by-state data, which suggests that Victoria is actually somewhere in the middle. Judging by the prices charged by the three largest retailers in each state and territory, Victorian householders are paying about the same as those in New South Wales and Queensland, less than those in South Australia, and more than those in Tasmania, the Northern Territory, Western Australia and the Australian Capital Territory.
Residential electricity prices. Author provided
The IEA cannot reasonably be blamed for the inadequate residential data for Australia that it reports, or for the nonexistent data on electricity prices paid by Australia’s industrial customers. The IEA does not do its own calculation of prices in each country, but rather relies on price estimates from official sources in those countries.
An obvious question that arises from this is where Australia would really rank internationally if we used prices that reflect what households are actually paying.
This is contentious, not least because prices in New South Wales, Queensland and South Australia increased – typically around 15% or more – from July this year. We do not know how prices have changed in other OECD member countries since the IEA’s recent publication (which covered prices for the first quarter of 2017). But we do know that prices in Australia have been far more volatile than in any other OECD country.
Assuming that other countries’ prices are roughly the same as they were in the first quarter of 2017, our estimate using the IEA’s data is that the typical household in South Australia is paying more than the typical household in any other OECD country. The typical household in New South Wales, Queensland or Victoria is paying a price that ranks in the top five.
It should also be remembered that these prices include excise and sales tax. Taxes on electricity supply in Australia are low by OECD standards – so if we use pre-tax prices, Australian households move even higher up the list.
There are serious question marks over Australia’s official electricity price reporting. Policy makers, consumers and the public have a right to expect better.
Bruce Mountain is a cofounder of MarkIntell, which is owned and operated by Carbon and Energy Markets and provides energy retail market data (including data used in this article) for use by regulators and governments in Australia.
For whom the bell tolls: cats kill more than a million Australian birds every day
Cats kill more than a million birds every day across Australia, according to our new estimate – the first robust attempt to quantify the problem on a nationwide scale.
By combining data on the cat population, hunting rates and spatial distribution, we calculate that they kill 377 million birds a year. Rates are highest in Australia’s dry interior, suggesting that feral cats pose a serious and largely unseen threat to native bird species.
Read more: Ferals, strays, pets: how to control the cats that are eating our wildlife
This has been a contentious issue for more than 100 years, since the spread of feral cats encompassed the entire Australian mainland. In 1906 the ornithologist A.J. Campbell noted that the arrival of feral cats in a location often immediately preceded the decline of many native bird species, and he campaigned vigorously for action:
Undoubtedly, if many of our highly interesting and beautiful birds, especially ground-loving species, are to be preserved from total extinction, we must as a bird-lovers’ union, at no distant date face squarely a wildcat destruction scheme.
His call produced little response, and there has been no successful and enduring reduction in cat numbers since. Nor, until now, has there been a concerted effort to find out exactly how many birds are being killed by cats.
Counting the cost
To provide a first national assessment of the toll taken by cats on Australian birds, we have compiled almost 100 studies detailing the diets of Australia’s feral cats. The results show that the average feral cat eats about two birds every five days.
We then combined these statistics with information about the population density of feral cats, to create a map of the estimated rates of birds killed by cats throughout Australia.
Number of birds eaten per square kilometre. Brett Murphy, Author provided
We conclude that, on average, feral cats in Australia’s largely natural landscapes kill 272 million birds per year. Bird-kill rates are highest in arid Australia (up to 330 birds per square km per year) and on islands, where rates can vary greatly depending on size.
We also estimate (albeit with fewer data) that feral cats in human-modified landscapes, such as the areas surrounding cities, kill a further 44 million birds each year. Pet cats, meanwhile, kill about 61 million birds per year.
Overall, this amounts to more than 377 million birds killed by cats per year in Australia – more than a million every day.
Which species are suffering?
In a related study, we also compiled records of the bird species being killed by cats in Australia. We found records of cats killing more than 330 native bird species – about half of all Australia’s resident bird species. In natural and remote landscapes, 99% of the cat-killed birds are native species. Our results also show that cats are known to kill 71 of Australia’s 117 threatened bird species.
Birds that feed or nest on the ground, live on islands, and are medium-sized (60-300g) are most likely to be killed by cats.
Galahs are among the many native species being killed by feral cats. Mark Marathon, Author provided
It is difficult to put a million-plus daily bird deaths in context without a reliable estimate of the total number of birds in Australia. But our coarse assessment from many published estimates of local bird density suggests that there are about 11 billion land birds in Australia, meaning that cats kill about 3-4% of Australia’s birds each year.
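The arithmetic behind these headline numbers is simple enough to check. Here is a minimal sketch (in Python, using only the estimates quoted above; the variable names are ours, not the study’s):

```python
# Tallying the annual bird-kill estimates quoted above (millions of birds per year).
feral_natural_landscapes = 272    # feral cats in largely natural landscapes
feral_modified_landscapes = 44    # feral cats in human-modified landscapes
pet_cats = 61                     # pet cats

total_per_year = feral_natural_landscapes + feral_modified_landscapes + pet_cats  # ~377 million
per_day = total_per_year / 365                                                    # just over 1 million a day

land_birds_millions = 11_000      # coarse estimate of Australia's land birds (11 billion)
share_killed = total_per_year / land_birds_millions                               # ~3-4% per year

print(total_per_year, round(per_day, 2), f"{share_killed:.1%}")  # 377 1.03 3.4%
```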
However, particular species are hit much harder than others, and the population viability of some species (such as quail-thrushes, button-quails and ground-feeding pigeons and doves) is likely to be especially threatened.
Our tally of bird deaths is comparable to similar estimates for other countries. Our figure is lower than a recent estimate for the United States, and slightly higher than in Canada. Overall, bird killings by cats seem to greatly outnumber those caused by humans.
In Australia, cats are likely to significantly increase the extinction risk faced by some bird species. In many locations, birds face a range of interacting threats, with cat abundance and hunting success shown to increase in fragmented bushland, in areas with high stocking rates, and in places with poorly managed fire regimes, so cat impacts compound these other threats.
Belling the cat
What can be done to reduce the impact? The federal government’s Threatened Species Strategy recognises the threat posed by feral cats, albeit mainly on the basis of their role in mammal extinctions.
The Threatened Species Strategy also prioritised efforts to control feral cats more intensively, to eradicate them from islands with important biodiversity values, and to expand a national network of fenced areas that exclude feral cats and foxes.
But while fences can create important havens for many threatened mammals, they are much less effective for protecting birds. To save birds, cats will need to be controlled on a much broader scale.
Read more: The war on feral cats will need many different weapons
We should also remember that this is not just a remote bush problem. Roughly half of Australia’s cats are pets, and they also take a considerable toll on wildlife.
While recognising the many benefits of pet ownership, we should also work to reduce the detrimental impacts. Fortunately, there is increasing public awareness of the benefits of not letting pet cats roam freely. With such measures, cat owners can help to look after the birds in their own backyards, and hence contribute to conserving Australia’s unique wildlife.
We acknowledge the contribution of Russell Palmer (WA Department of Biodiversity Conservation and Attractions), Chris Dickman (University of Sydney), David Paton (University of Adelaide), Alex Nankivell (Nature Foundation SA Inc.), Mike Lawes (University of KwaZulu-Natal), and Glenn Edwards (Department of Environment and Natural Resources) to this article.
John Woinarski has received funding from the Australian government's National Environmental Science Programme (Threatened Species Recovery Hub).
Brett Murphy has received funding from the Australian government's National Environmental Science Programme (Threatened Species Recovery Hub).
Leigh-Ann Woolley has received funding from the Australian government's National Environmental Science Programme (Threatened Species Recovery Hub).
Sarah Legge has received funding from the Australian government's National Environmental Science Programme (Threatened Species Recovery Hub).
Stephen Garnett has received funding from the Australian government's National Environmental Science Programme (Threatened Species Recovery Hub).
Tim Doherty receives funding from the Hermon Slade Foundation, Australian Academy of Sciences and Ecological Society of Australia. He is a board member of the Society for Conservation Biology Oceania and member of the Ecological Society of Australia and Australian Mammal Society.
Magpies can form friendships with people – here's how
Can one form a friendship with a magpie – even when adult males are protecting their nests during the swooping season? The short answer is: “Yes, one can”, although science has only just begun to provide feasible explanations for friendship in animals, let alone for cross-species friendships between humans and wild birds.
Ravens and magpies are known to form powerful allegiances among themselves. In fact, Australia is thought to be a hotspot for cooperative behaviour in birds worldwide. They like to stick together with family and mates, in the good Australian way.
Read more: In defence of magpies: the bird world’s bad boy is simply misunderstood
Of course, many bird species may readily come to a feeding table and become tame enough to take food from our hand, but this isn’t really “friendship”. However, there is evidence that, remarkably, free-living magpies can forge lasting relationships with people, even without depending on us for food or shelter.
When magpies are permanently ensconced on human property, they are also far less likely to swoop the people who live there. Over 80% of all successfully breeding magpies live near human houses, which means the vast majority of people, in fact, never get swooped. And since magpies can live between 25 and 30 years and are territorial, they can develop lifelong friendships with humans. This bond can extend to trusting certain people around their offspring.
A key reason why friendships with magpies are possible is that we now know that magpies are able to recognise and remember individual human faces for many years. They can learn which nearby humans do not constitute a risk. They will remember someone who was good to them; equally, they remember negative encounters.
Why become friends?
Magpies that actively form friendships with people make this investment (from their point of view) for good reason. Properties suitable for magpies are hard to come by and the competition is fierce. Most magpies will not secure a territory – let alone breed – until they are at least five years old. In fact, only about 14% of adult magpies ever succeed in breeding. And based on extensive magpie population research conducted by R. Carrick in the 1970s, even if they breed successfully every single year, they may successfully raise only seven to eleven chicks to adulthood and breeding in a lifetime. There is a lot at stake with every magpie clutch.
Read more: Bird-brained and brilliant: Australia’s avians are smarter than you think
The difference between simply not swooping someone and a real friendship manifests in several ways. When magpies have formed an attachment they will often show their trust, for example, by formally introducing their offspring. They may allow their chicks to play near people, not fly away when a resident human is approaching, and actually approach or roost near a human.
In rare cases, they may even join in human activity. For example, magpies have helped me garden by walking in parallel to my weeding activity and displacing soil as I did. One magpie always perched on my kitchen window sill, looking in and watching my every move.
The curious magpie following the author’s movements in her home (Photo by G.Kaplan no reuse)
On one extraordinary occasion, an adult female magpie gingerly entered my house on foot, and hopped over to my desk where I was sitting. She watched me type on the keyboard and even looked at the screen. I had to get up to take a phone call and when I returned, the magpie had taken up a position at my keyboard, pecked the keys gently and then looked at the “results” on screen.
The bird was curious about everything I did. She also wanted to play with me and found my shoelaces particularly attractive, pulling them and then running away a little only to return for another go.
Importantly, it was the bird (not hand-raised, but a free-living adult female) that had taken the initiative and chosen to interact socially. Such behaviour, as research has shown particularly in primates, is affiliative and forms part of the basis of social bonds and friendships.
Risky business
If magpies can be so good with humans, how can one explain their swooping at people (even if it is only for a few weeks in the year)? It’s worth bearing in mind that swooping magpies (invariably males on guard duty) do not act in aggression or anger but as nest defenders. The strategy they choose is based on risk assessment.
A risk is posed by someone who is unknown and was not present at the time of nest building, which unfortunately is often the case in public places and parks. That person is then classified as a territorial intruder and thus a potential risk to its brood. At this point the male guarding the brooding female is obliged to perform a warning swoop, literally asking a person to step away from the nest area.
If warnings are ignored, the adult male may try to conduct a near contact swoop aimed at the head (the magpie can break its own neck if it makes contact, so it is a strategy of last resort only). Magpie swooping is generally a defensive action taken when someone unknown approaches who the magpie believes intends harm. It is not an arbitrary attack.
A fearless magpie in pursuit of a larger and dangerous brown goshawk, keeping itself and other species safe (Photo by G. Kaplan, no reuse)
When I was swooped for the first time in a public place I slowly walked over to the other side of the road. Importantly, I allowed the male to study my face and appearance from a safe distance so he could remember me in future – a useful strategy, since we now know that magpies remember human faces. Offering a piece of mince, or taking a wide berth around the magpie’s nest, may eventually convince the nervous magpie that he does not need to deter this individual anymore because she or he poses little or no risk – and, who knows, may even become a friend in future.
A sure way of escalating conflict is to fence them with an umbrella or any other device, or to run away at high speed. This human approach may well confirm for the magpie that the person concerned is dangerous and needs to be fought with every available strategy.
In dealing with magpies, as in global politics, de-escalating a perceived conflict is usually the best strategy.
Gisela Kaplan received funding from the ARC in the past for field research on free-ranging magpies .
How to work out which coral reefs will bleach, and which might be spared
Regional variations in sea surface temperature, related to seasons and El Niño, could be crucial for the survival of coral reefs, according to our new research. This suggests that we should be able to identify the reefs most at risk of mass bleaching, and those that are more likely to survive unscathed.
Healthy coral reefs support diverse ecosystems, hosting 25% of all marine fish species. They provide food, coastal protection and livelihoods for at least 500 million people.
But global warming, coupled with other pressures such as nutrient and sediment input, changes in sea level, waves, storms, ventilation, hydrodynamics, and ocean acidification, could lead to the end of the world’s coral reefs in a couple of decades.
Read more: How much coral has died in the Great Barrier Reef’s worst bleaching event?
Climate warming is the major cause of stress for corals. The world just witnessed an event described as the “longest global coral die-off on record”, and scientists have been raising the alarm about coral bleaching for decades.
The first global-scale mass bleaching event happened in 1998, destroying 16% of the world’s coral reefs. Unless greenhouse emissions are drastically reduced, the question is no longer if coral bleaching will happen again, but when and how often?
To help protect coral reefs and their ecosystems, effective management and conservation strategies are crucial. Our research shows that understanding the relationship between natural variations of sea temperature and human-driven ocean warming will help us identify the areas that are most at risk, and also those that are best placed to provide safe haven.
A recurrent threat
Bleaching happens when sea temperatures are unusually high, causing the corals to expel the coloured algae that live within their tissues. Without these algae, corals are unable to reproduce or to build their skeletons properly, and can ultimately die.
The two most devastating global mass bleaching events on record – in 1998 and 2016 – were both triggered by El Niño. But when water temperatures drop back to normal, corals can often recover.
Certain types of coral can also acclimatise to rising sea temperatures. But as our planet warms, periods of bleaching risk will become more frequent and more severe. As a consequence, corals will have less and less time to recover between bleaching events.
We are already witnessing a decline in coral reefs. Global populations have declined by 1-2% per year in response to repeated bleaching events. Closer to home, the Great Barrier Reef lost 50% of its coral cover between 1985 and 2012.
A non-uniform response to warming
While the future of worldwide coral reefs looks dim, not all reefs will be at risk of recurrent bleaching at the same time. In particular, reefs located south of 15ºS (including the Great Barrier Reef, as well as islands in south Polynesia and Melanesia) are likely to be the last regions to be affected by harmful recurrent bleaching.
We used to think that Micronesia’s reefs would be among the first to die off, because the climate is warming faster there than in many other places. But our research, published today in Nature Climate Change, shows that the overall increase in temperature is not the only factor that affects coral bleaching response.
In fact, the key determinant of recurrent bleaching is the natural variability of ocean temperature. Under warming, temperature variations associated with seasons and climate processes like El Niño influence the pace of recurrent bleaching, and explain why some reefs will experience bleaching risk sooner than others in the future.
Different zones of the Pacific are likely to experience differing amounts of climate variability. Author provided
Degrees of future bleaching risk for corals in the three main Pacific zones. Author provided
Our results suggest that El Niño events will continue to be the major drivers of mass bleaching events in the central Pacific. As average ocean temperatures rise, even mild El Niño events will have the potential to trigger widespread bleaching, meaning that these regions could face severe bleaching every three to five years within just a few decades. In contrast, only the strongest El Niño events will cause mass bleaching in the South Pacific.
In the future, the risk of recurrent bleaching will be more seasonally driven in the South Pacific. Once the global warming signal pushes summer temperatures to dangerously warm levels, these coral reefs will experience bleaching events every summer. In the western Pacific, the absence of natural variations in temperature initially protects the coral reefs, but even a small additional increase in warming can rapidly shift them from a safe haven to a permanent bleaching situation.
Read more: Feeling helpless about the Great Barrier Reef? Here’s one way you can help
One consequence is that, for future projections of coral bleaching risk, the global warming rate matters far more than the details of regional warming. The absence of consensus about regional patterns of warming across climate models is therefore less of an obstacle than previously thought: globally averaged warming from climate models, combined with locally observed sea temperature variations, will give us better projections anyway.
Understanding the regional differences can help reef managers identify the reef areas that are at high risk of recurring bleaching events, and which ones are potential temporary safe havens. This can buy us valuable time in the battle to protect the world’s corals.
Clothilde Emilie Langlais was funded by the Pacific Australian Climate Science and Adaptation Program funded by AusAid.
Scott Heron receives funding and support from the U.S. National Oceanic and Atmospheric Administration's Satellites division (NESDIS) and Coral Reef Conservation Program, and is affiliated with James Cook University. The contents in this piece are solely the opinions of the authors and do not constitute a statement of policy, decision or position on behalf of NOAA or the U.S. Government.
Andrew Lenton does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Mercury from the northern hemisphere is ending up in Australia
Mercury pollution has a long legacy in the environment. Once released into the air, it can cycle between the atmosphere and ecosystems for years or even decades before ending up deep in the oceans or land.
The amount of mercury in the ocean today is about six times higher than it was before humans began to release it by mining. Even if we stopped all human mercury emissions now, ocean mercury would only decline by about half by 2100.
To address the global and long-lasting mercury problem, a new United Nations treaty called the Minamata Convention on Mercury came into effect last month. The treaty commits participating countries to limit the release of mercury and monitor the impacts on the environment. Australia signed the Convention in 2013 and is now considering ratification.
Read more: Why won’t Australia ratify an international deal to cut mercury pollution?
Until now, we have only been able to guess how much mercury might be in the air over tropical Australia. Our new research, published in the journal Atmospheric Chemistry and Physics, shows that there is less mercury in the Australian tropics than in the northern hemisphere – but that polluted northern hemisphere air occasionally comes to us.
A global problem
While most of mercury’s health risks come from its accumulation in ocean food webs, its main entry point into the environment is through the atmosphere. Mercury in air comes from both natural sources and human activities, including mining and burning coal. One of the biggest mercury sources is small-scale gold mining – a trade that employs millions of people in developing countries but poses serious risks to human health and the environment.
Small-scale gold mining is an economic mainstay for millions of people, but it releases mercury directly into the air and water sources.
Once released to the air, mercury can travel thousands of kilometres to end up in ecosystems far away from the original source.
Measuring mercury in the tropics
While the United Nations was gathering signatures for the Minamata Convention, we were busy measuring mercury at the Australian Tropical Atmospheric Research Station near Darwin. Our two years of measurements are the first in tropical Australia. They are also the only tropical mercury measurements anywhere in the Maritime Continent region covering southeast Asia, Indonesia, and northern Australia.
We found that mercury concentrations in the air above northern Australia are 30-40% lower than in the northern hemisphere. This makes sense; most of the world’s population lives north of the Equator, so most human-driven emissions are there too.
More surprising is the seasonal pattern in the data. There is more mercury in the air during the dry season than the wet season.
The Australian monsoon appears to be partly responsible for the seasonal change. The amount of mercury jumps up sharply at the start of the dry season when the winds shift from blowing over the ocean to blowing over the land.
In the dry season the air passes over the Australian continent before arriving at the site, while in the wet season the air usually comes from over the ocean to the west of Darwin. Howard et al., 2017 (modified)
But wind direction can’t explain the whole story. Mercury is likely being removed from the air by the intense rains that characterise the wet season. In other words, the lower mercury in the air during the wet season may mean more mercury is being deposited to the ocean and the land at this time of year. Unfortunately, there simply isn’t enough information from Australian ecosystems to know how this impacts local plants and wildlife.
Fires also play a role. Mercury previously absorbed by grasses and trees can be released back to the atmosphere when the vegetation burns. In our data, we see occasional large mercury spikes associated with dry season fires. As we move into a bushfire season predicted to be unusually severe, we may see even more of these spikes.
Air from the north
Although mercury levels were usually low in the wet season, on a few days each year the mercury jumped up dramatically.
To figure out where these spikes were coming from, we used two different models. These models combine our understanding of atmospheric physics with real observations of wind and other meteorological parameters.
Both models point to the same source: air transported from the north.
Australia is usually shielded from northern hemispheric air by a “chemical equator” that stops air from mixing. This barrier isn’t static – it moves north and south throughout the year as the position of the sun changes.
A few times a year, the chemical equator moves so far south that the top end of Australia actually falls within the atmospheric northern hemisphere. When this happens, polluted northern hemisphere air can flow directly to tropical Australia.
We observed 13 days when our measurement site near Darwin sampled more northern hemisphere air than southern hemisphere air. On each of these days, the amount of mercury in the air was much higher than on the days before or after.
Tracing the air backwards in time showed that the high-mercury air travelled over the Indonesian archipelago before arriving in Australia. We don’t yet know whether that mercury came from pollution, fires, or a mix of the two.
The highest mercury is observed when the air comes from the northern hemisphere. Howard et al., 2017 (modified)
A global solution
To effectively reduce mercury exposure in sensitive ecosystems and seafood-dependent populations around the world, aggressive global action is necessary.
The cross-boundary influences on mercury that we have observed in northern Australia highlight the need for the type of multinational collaboration that the Minamata Convention will foster.
Our new data establish a baseline for monitoring the effectiveness of new actions taken under the Minamata Convention. With the first Conference of the Parties having taken place last week, hopefully it will only be a matter of time before we begin to see the benefit.
Jenny Fisher receives funding from the Australian Research Council and the L'Oréal-UNESCO For Women in Science Fellowship program.
Peter Nelson received funding from the Commonwealth Department of Environment, Water, Heritage and the Arts. He is co-lead of the UN Environment Partnership on Mercury Control from Coal Combustion.
Dean Howard and Grant C Edwards do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointments above.
From feral camels to 'cocaine hippos', large animals are rewilding the world
Throughout history, humans have taken plants and animals with them as they travelled the world. Those that survived the journey to establish populations in the diaspora have found new opportunities as they integrate into new ecosystems.
These immigrant populations have come to be regarded as “invaders” and “aliens” that threaten pristine nature. But for many species, migration may just be a way to survive the global extinction crisis.
In our recently published study, we found that one of the Earth’s most imperilled group of species is hanging on in part thanks to introduced populations.
Megafauna - plant-eating terrestrial mammals weighing more than 100kg - have established in new and unexpected places. These “feral” populations are rewilding the world with unique and fascinating ecological functions that had been lost for thousands of years.
Today’s world of giants is only a shadow of its former glory. Around 50,000 years ago, giant kangaroos, rhino-like diprotodons, and other unimaginable animals were lost from Australia.
Read more: Giant marsupials once migrated across an Australian Ice Age landscape
Later, around 12,000 years ago, the last of the mammoths, glyptodonts, several species of horses and camels, house-sized ground sloths and other great beasts vanished from North America.
In New Zealand, a mere 800 years ago, a riot of giant flightless birds still grazed and browsed the landscape.
The loss of Earth’s largest terrestrial animals at the end of the Pleistocene was most likely caused by humans.
Sadly, even those large beasts that survived that collapse are now being lost, with 60% of today’s megafauna threatened with extinction. This threat is leading to international calls for urgent intervention to save the last of Earth’s giants.
A wilder world than we think
Formal conservation distribution maps show that much of Earth is empty of megafauna. But this is only a part of the picture.
Many megafauna are now found outside their historic native ranges. In fact, thanks to introduced populations, regional megafauna species richness is substantially higher today than at any other time during the past 10,000 years.
Megafauna have expanded beyond their historic native range to rewild the world. Number of megafauna per region, in their ‘native’ range only (a) and in their full range (b). Modified and reproduced from Lundgren et al. 2017
Worldwide introductions have increased the number of megafauna by 11% in Africa and Asia, by 33% in Europe, by 57% in North America, by 62% in South America, and by 100% in Australia.
Australia lost all of its native megafauna tens of thousands of years ago, but today has eight introduced megafauna species, including the world’s only wild population of dromedary camels.
Remote camera trap footage from our research program shows wild brumbies, wild donkeys and wild camels sharing water sources with Australian dingoes, emus and bustards in the deserts of South Australia.
These immigrant megafauna have found critical sanctuary. Overall, 64% of introduced megafauna species are either threatened, extinct, or declining in their native ranges.
Some megafauna have survived thanks to domestication and subsequent “feralisation”, forming a bridge between the wild pre-agricultural landscapes of the early Holocene almost 10,000 years ago, to the wild post-industrial ecosystems of the Anthropocene today.
Wild cattle, for example, are descendants of the extinct aurochs. Meanwhile, the wild camels of Australia have brought back a species extinct in the wild for thousands of years. Likewise, the vast majority of the world’s wild horses and wild donkeys are feral.
There have been global calls to rewild the world, but rewilding has already been happening, often with little intention and in unexpected ways.
A small population of wild hippopotamuses has recently established in South America. The nicknamed “cocaine hippos” are the offspring of animals who escaped the abandoned hacienda of Colombian drug lord Pablo Escobar.
Colombia’s growing ‘cocaine hippo’ population is descended from animals kept at Pablo Escobar’s hacienda.
By insisting that only idealised pre-human ecosystems are worth conserving, we overlook the fact that these emerging new forms of wilderness are not only common but critical to the survival of many existing ecosystems.
Vital functions
Megafauna are Earth’s tree-breakers, wood-eaters, hole-diggers, trailblazers, wallowers, nutrient-movers, and seed-carriers. By consuming coarse, fibrous plant matter they drive nutrient cycles that enrich soils, restructure plant communities, and help other species to survive.
The wide wanderings of megafauna move nutrients uphill that would otherwise wash downstream and into the oceans. These animals can be thought of as “nutrient pumps” that help maintain soil fertility. Megafauna also sustain communities of scavengers and predators.
In North America, we have found that introduced wild donkeys, locally known as “burros”, dig wells more than a metre deep to reach groundwater. At least 31 species use these wells, and in certain conditions they become nurseries for germinating trees.
Introduced wild donkeys (burros) are engineering the Sonoran Desert, United States.
The removal of donkeys and other introduced megafauna to protect desert springs in North America and Australia seems to have led to an exuberant growth of wetland vegetation that constricted open water habitat, dried some springs, and ultimately resulted in the extinction of native fish. Ironically, land managers now simulate megafauna by manually removing vegetation.
Introduced megafauna probably perform many more ecological functions that remain undocumented, because we have yet to accept these organisms as having ecological value.
Living in a feral world
Like any other group of species, megafauna benefit some species while challenging others. Introduced megafauna can put huge pressure on plant communities, but this is also true of native megafauna.
Whether we consider the ecological roles of introduced species like burros and brumbies as desirable or not depends primarily on our own values. But one thing is certain: no species operates in isolation.
Although megafauna are very large, predators can have significant influence on them. In Australia, dingo packs act cooperatively to hunt wild donkeys, wild horses, wild water buffalo and wild boar. In North America, mountain lions have been shown to limit populations of wild horses in some areas of Nevada.
Visions of protected dingoes hunting introduced donkeys and Sambar deer in Australia, or protected wolves hunting introduced Oryx and horses in the American West, can give us a new perspective on conserving both native and introduced species.
Nature doesn’t stand still. It is both pragmatic and optimistic to dispense with visions of historic wilderness, and with the brutal measures usually applied to enforce those ideals, and to focus instead on the wilderness that actually exists.
After all, in this age of mass extinction, are not all species worth conserving?
This research will be presented at the 2017 International Compassionate Conservation Conference in Sydney.
Daniel Ramp is the Director of the Centre for Compassionate Conservation (CfCC) at UTS. The CfCC collaborates with, and receives research grants from, a range of government, industry, and NGOs to work on conservation actions that address conservation problems and promote compassion for wild animals through peaceful coexistence. He is a Director of Voiceless, which is a funding partner of the Centre.
Arian Wallach, Erick Lundgren, and William Ripple do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointments above.
I've always wondered: Why don't hippos get cholera?
This is an article from I Have Always Wondered, a series where readers send in questions they’d like an expert to answer. Send your question to alwayswondered@theconversation.edu.au
Why don’t hippopotamuses get cholera? Why are some animals resistant to waterborne disease? – Phil Morey
The short answer is that cholera has evolved to infect humans, not hippos. Cholera is a disease caused by a curved rod-shaped bacterium called Vibrio cholerae. The disease is characterised by a profuse diarrhoea that resembles “rice water”, and can lead to death within hours.
Transmission electron microscope image of Vibrio cholerae that has been negatively stained. Dartmouth Electron Microscope Facility via Wikipedia
Humans contract the disease from water contaminated with human sewage containing the bacteria. As cholera is a waterborne disease, it is prevalent in areas where human sanitation is lacking or less than ideal. Unlike many other diseases, it can’t be passed to us from animals, in the way that malaria is passed on by mosquitoes.
Once ingested by humans, the bacteria attach to the small intestine wall. There they reproduce, and produce a toxin called choleragen. The choleragen toxin is made up of two parts, called A and B. The B portion attaches the toxin to the cells in the intestine, and the A portion chemically forces electrolytes and water out of the intestinal cells themselves, leading to massive dehydration, diminished blood volume and ultimately death.
Vibrio cholerae, the bacterium that causes cholera, only infects humans, and can only be transmitted to new human hosts via contaminated water. It is likely that the disease mechanism is precisely adapted to human-specific molecules on the cells lining our small intestine, and to the molecular structure of the bacterium’s toxins.
The annotation on this 19th century medical illustration reads: ‘A young woman of Vienna, 23. The same woman one hour after the onset of cholera, and four hours before death.’ Wellcome Library, London, via Flickr/the lost gallery
Over millennia, both the disease-causing organism (pathogen) and host have been evolving counter-strategies against each other: the host to evade the pathogen, and the pathogen to invade the host. These battles have led to the bacteria becoming host-specific, and now only able to infect humans.
The cholera vaccine works by taking advantage of this close host/pathogen relationship. It inhibits the action of the B portion of the cholera toxin, and hence prevents the toxin from attaching to the intestinal wall.
Other waterborne diseases are caused by other pathogens, although the specific mechanisms or molecules involved differ. In some cases, as in cholera, the molecules required for infection are host-specific. Other pathogens are not species-specific, but they tend to affect closely related species rather than distantly related ones. For example, foot and mouth disease affects cattle, sheep, deer and pigs, because they are all cloven-hoofed animals (Artiodactyla) and thus closely related species.
Hippopotamuses (Hippopotamus amphibius and Choeropsis liberiensis) are more closely related to cetaceans (whales and dolphins) than to humans, so it is not surprising that they have different pathogens. That being said, hippopotamuses, like other animals, are likely to suffer from loose stools (dung) from time to time, whether due to other pathogens or to the quality of the huge amounts of plant material they ingest every day.
Dung is super important in hippopotamus society. Hippopotamus defecation or “dung showering” involves flicking their tail at the same time as defecating to distribute their dung far and wide, hence dung is used to mark their territory and assert dominance.
If hippopotamus dung spread a disease like cholera, it could be rapidly fatal for large populations. It is likely that the individuals affected would be removed by natural selection. Those that were resistant, or only mildly affected, would overcome the disease and live on to produce disease-resistant offspring. Over time, it is therefore likely hippopotamuses have adapted to their aquatic environments and thus rarely, if ever, become infected with waterborne diseases.
* Email your question to alwayswondered@theconversation.edu.au
* Tell us on Twitter by tagging @ConversationEDU with the hashtag #alwayswondered, or
* Tell us on Facebook
Julie Old does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
'No one is steering the ship': five lessons learned (or not) since the SA blackout
A year ago, the power system went down in South Australia. Homes and businesses across the state were without electricity for hours, some for days. While its specific causes have already been worked through, the nation’s most widespread blackout in decades quickly became a symbol of “Australia’s energy crisis”.
One year on, it is time to ask: how far have we come and what have we learned? Some progress has been made, but there is still a long way to go before Australians can rest assured that they have an affordable, reliable and sustainable energy system.
While politicians have been beating their chests, companies pleading their case, and energy institutions busying themselves with the search for solutions, five lessons have emerged from the great SA blackout.
Read more: Baffled by baseload? Dumbfounded by dispatchables? Here’s a glossary of the energy debate
1. Big storms cause blackouts – and blackouts cause big (media) storms
There’s nothing quite like a blackout to focus the minds of politicians and industry alike, and generate momentum for energy policy reform.
In the blackout’s immediate aftermath, federal and state energy ministers together commissioned Chief Scientist Alan Finkel to review the National Electricity Market. Finkel presented his findings in June this year.
So far, 49 of his 50 recommendations have been accepted – all except the contentious Clean Energy Target. The blackout served as a reminder that electricity obeys the laws of physics, not of governments, and that system security is paramount. It has already prompted some important reforms that reduce the chances of future blackouts, such as new rules for extreme power system conditions and new obligations on network businesses.
2. Energy is now a political plaything
While being in the spotlight has created the opportunity for much-needed reforms, it has also made energy policy a political plaything.
Consider the petty public stoush between Labor SA Premier Jay Weatherill and Liberal federal energy minister Josh Frydenberg in March, or the lump of coal that was passed around Federal Parliament back in February. Politicians have created a false contest between coal and renewables, instead of working together to fix the real policy problems.
Meanwhile, consumers have been left in the dark, both literally and metaphorically. Energy institutions and companies have largely failed to explain themselves, and what is going on, to the people that matter most.
Reports from the market operator, written in technical language for industry, have triggered media panic, and politicians have seized these moments to point the finger rather than reassure the public.
Read more: The day Australia was put on blackout alert
It’s not easy. These problems are complex, and no decent explanation will fit a media soundbite. But consumers must be brought along on the journey; confusion creates unnecessary fear, unhelpful reactions, and false expectations.
3. In a crisis, politicians will act – whether or not it helps
Politicians are understandably keen to act to keep the lights on. The SA government responded by announcing a go-it-alone Energy Plan. Some other states, and the federal government, are now buying or contracting for new electricity generation and storage.
Some interventions help, but others could make matters worse. We have seen a lot of policy on the run in the past year, yet state and federal governments continue to ignore the policy changes that would make the biggest difference.
New generation and storage will be needed to bring down electricity prices, reduce emissions, and avoid supply shortfalls as older power stations are closed. Governments are jumping in to build that generation. But this could force existing generation out of the market, making the problem worse.
Industry has made it clear that policy stability, including a credible emissions reduction mechanism, is needed to enable appropriate investments to be made. Yet stability and predictability in energy and climate change policy have been sorely lacking over the past decade.
If governments can collectively agree to implement Finkel’s plan in full, this would give the market more certainty on how emissions will be cut over time, and how the entry of new technologies and the exit of old power stations will be managed.
Laying out a path from where we are today to where we want to be in future is essential. Without it, uncertainty will continue to paralyse investors and drive up electricity prices.
4. All hands are on deck, but no one is steering the ship
There is a lot going on, but it is still not clear where it is all headed. Since the blackout there has been unprecedented attention on the energy sector. Everyone is busily trying to solve this, but from their own “silo”.
The sector has always suffered from a bit of a leadership vacuum. The top policy body, the COAG Energy Council, can be compromised by partisan politics and conflicts of interest, or simply bogged down in process.
Yet it is important that Canberra works with the states and territories, because each government has different legislative levers and political priorities that affect the national energy system. Policy leadership will be crucial throughout the transition to a cleaner energy future. The COAG Energy Council needs to focus on the core strategic issues, and it will need clear guidance from the sector to do so.
5. We should be able to avoid blackouts this summer, but longer-term solutions are still needed
We are certainly better prepared for the coming summer as a result of lessons learned from the SA blackout and the renewed attention on energy policy reform. Back-up generation and demand-response schemes are being organised, new energy storage is being built, and this week Prime Minister Malcolm Turnbull struck a deal that should ensure adequate domestic gas supplies.
If all goes to plan, we should be able to avoid problems this summer. But we shouldn’t be relying on emergency measures every year. Longer-term solutions are needed, and these will require continued action, building on the momentum of the past year.
Read more: A year since the SA blackout, who’s winning the high-wattage power play?
The SA blackout was a wake-up call for the sector, triggering much-needed new thinking and some early reforms. But it has also thrown energy into the spotlight, requiring the sector to lift its game, particularly in communicating with the public and each other.
Looking back on the past year, we have come a long way, but it is still not clear where we are going and who will steer us there. Australians must hope that the new Energy Security Board, which includes the heads of the three main energy institutions, can help state and federal governments chart a steady course.
A shared sense of direction, across states and across party lines, is needed to focus the sector on the horizon, rather than on the waves below.
Kate Griffiths does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Debris from the 2011 tsunami carried hundreds of species across the Pacific Ocean
When a foreign species arrives in a new environment and spreads to cause some form of economic, health, or ecological harm, it’s called a biological invasion. Often stowing away among the cargo of ships and aircraft, such invaders cause billions of dollars of economic loss annually across the globe and have devastating impacts on the environment.
While the number of introductions which eventually lead to such invasions is rising across the globe, most accidental introduction events involve small numbers of individuals and species showing up in a new area.
But new research published today in Science has found that hundreds of marine species travelled from Japan to North America in the wake of the 2011 Tōhoku earthquake and tsunami (which struck the east coast of Japan with devastating consequences).
Read more: Widespread invasive species control is a risky business
Marine introductions result from biofouling, the process by which organisms start growing on virtually any submerged surface. Within days a slimy bacterial film develops. After months to a few years (depending on the water temperature) fully formed communities may be found, including algae, molluscs such as mussels, bryozoans, crustaceans, and other animals.
Current biosecurity measures, such as antifouling on ships and border surveillance, are designed to deal with a steady stream of potential invaders. But they are ill-equipped to deal with an introduction event of the scale recorded along most of the North American coast. This would be just as true for Australia, with its extensive coastlines, as it is for North America.
Mass marine migration
Marine animals were transported vast distances on tsunami debris. Carla Schaffer / AAAS
This research, led by James Carlton of Williams College, shows that in the few years after the 2011 earthquake and tsunami, many marine organisms arrived along the west coast of North America on debris derived from human activity. The debris ranged from small pieces of plastic to buoys, to floating docks and damaged marine vessels. All of these items harboured organisms. Across the full range of debris surveyed, scores of individuals from roughly 300 species of marine creatures arrived alive. Most of them were new to North America.
The tsunami swept coastal infrastructure and many human artefacts out to sea. Items that had already been in the water before the tsunami carried their marine communities along with them. The North Pacific Current then transported these living communities across the Pacific to Alaska, British Columbia, Oregon, Washington and California.
Japanese tsunami buoy with Japanese oyster Crassostrea gigas, found floating offshore of Alsea Bay, Oregon in 2012. James T. Carlton
What makes this process unusual is the way a natural extreme event – the earthquake and associated tsunami – gave rise to an extraordinarily large introduction event because of its impact on coastal infrastructure. The researchers argue that this event is of unprecedented magnitude, constituting what they call “tsunami-driven megarafting”: rafting being the process by which organisms may travel across oceans on debris – natural or otherwise.
It’s not known how many of these new species will establish themselves and spread in their new environment. But, given what we know about the invasion process, it’s certain at least some will. Establishment and initial population growth often go unnoticed, especially in marine species, and a new arrival is frequently detected only once it is costly or impossible to do anything about it.
Biosecurity surveillance systems are designed to overcome this problem, but surveillance of an entire coast for multiple species is a significant challenge.
Perhaps one of the biggest questions the study raises is whether this was a one-off event, or whether similar occurrences should be expected in future. Given the rapid rate of coastal infrastructure development, the answer is clear: this adds a new dimension to coastal biosecurity that will have to be considered.
Investment in coastal planning and early warning systems will help, as will reductions in plastic pollution. But such investment may be of little value if action is not taken to adhere to, and then exceed, nationally determined contributions to the Paris Agreement. Without doing so, a climate change-driven sea level rise of more than 1 m by the end of the century may be expected. This will add significantly to the risks posed by the interactions between natural extreme events and the continued development of coastal infrastructure. In other words, this research has uncovered what might be an increasingly common new ecological process in the Anthropocene – the era of human-driven global change.
Steven Chown is the President of the Scientific Committee on Antarctic Research.
A year since the SA blackout, who's winning the high-wattage power play?
It’s a year to the day since the entire state of South Australia was plunged into darkness. And what a year it’s been, for energy policy geeks and political tragics alike.
Parked at the western end of the eastern states’ electricity grid, South Australia has long been an outlier, in energy policy as well as geography. Over the past decade it has had a tempestuous relationship with the federal government, be it Labor or Coalition. As with water policy, the South Australians often suspect they are being left high and dry by their upstream neighbours.
Read more: South Australia’s energy plan gives national regulators another headache
The policy chaos over the carbon price left the Renewable Energy Target as a far more prominent investment signal than it would otherwise have been. South Australia carried on attracting wind farms, which earned more than their fair share of the blame for high electricity prices.
On September 28, 2016, a “once-in-50-year storm” blew over a string of electricity pylons, tripping the whole state’s power grid. While the blackout, which lasted 5 hours in Adelaide and longer elsewhere, was still unfolding, critics of renewables took a leap into the dark as part of a wider blame game.
Despite being described as a “confected conflict”, the skirmish was serious enough to prompt the federal government to commission Chief Scientist Alan Finkel’s landmark review of the entire National Electricity Market, with a deadline of mid-2017.
Meanwhile, in early December, federal environment minister Josh Frydenberg was forced to backtrack after saying the Coalition was prepared to consider an emissions intensity scheme. SA Premier Jay Weatherill was unamused by the flip-flop and threatened to get together with other states to go it alone on carbon pricing.
February saw a series of “load shedding” events during a heatwave, which left some Adelaide homes once more without power and saw the grid wobble in NSW too. (It should be noted that the now infamous Liddell power station was unable to increase its output during the incident.)
Policy by tweet
It was then that Twitter entered the fray. The “accidental billionaire” Mike Cannon-Brookes asked SolarCity chief executive Lyndon Rive how quickly a battery storage system might be up and running. Rive’s cousin, a certain Elon Musk, intervened with his famous offer:
Within days, both Weatherill and Turnbull had had conversations with Musk, and Turnbull announced a “Snowy Hydro 2.0” storage proposal.
Meanwhile, Weatherill unveiled his SA Energy Plan, which the Guardian called a “survivalist fix of last resort”. We now know that the plan cost A$1 million to produce.
Then, on March 16, at the launch of a 5-megawatt “virtual power plant” in Adelaide, Weatherill had some choice words for Frydenberg who, entertainingly enough, was standing right next to him:
“I’ve got to say, it is a little galling to be standing here, next to a man that’s been standing up with his prime minister, bagging South Australia at every step of the way over the last six months… And for you to then turn around, in a few short months, when there’s a blackout, and point the finger at SA for the fact that our leadership in renewable energy was the cause of that problem is an absolute disgrace.”
Frydenberg kept a notably low profile for a while after this.
Finkel fires up
In June, Finkel released his keenly awaited review. A significant number of Liberals and Nationals didn’t like his suggested Clean Energy Target, and immediately set about trying to insert coal into it.
Despite being conceived as an acceptable compromise, the Clean Energy Target was bashed from both sides. It was criticised as too weak to reach Australia’s emissions target and little more than “business as usual”, but was also “unconscionable” to former Prime Minister Tony Abbott.
Weatherill’s next major stand-alongside was an even bigger deal than the Frydenberg stoush. On July 7, he and Musk announced that part of his earlier SA energy plan would become reality: a 129-megawatt-hour lithium-ion battery farm, to be built alongside a wind farm in Jamestown.
Speaking at a book launch, Weatherill used the f-word to describe specific media opponents of renewables, earning himself opprobrium in the pages of The Australian, and admiration in more progressive areas of social media.
Federal treasurer Scott Morrison returned fire, deriding the battery farm as “a Big Banana”.
However, there was another big announcement in Weatherill’s locker: a A$650-million concentrated solar thermal power plant to be built near Port Augusta, with potential for more.
Quietly, the “energy security target” component of the SA plan, which had been rubbished, was deferred, while a renewables-based “minigrid” on the Yorke peninsula was announced.
Whatever next?
What will happen now? “Events, dear boy, events,” as Harold Macmillan didn’t say. Musk is back in Adelaide to talk about his Mars mission, with an appearance scheduled for Jamestown. Would anyone bet against another SA government announcement? More batteries? Electric cars? Space planes…?
The Jamestown battery should come online in December (or it’s free!). Weatherill will presumably be hoping that Turnbull’s government staggers on, bleeding credibility and beefing up the anti-Liberal protest vote until the March 2018 state election, and that they continue to make themselves look like a rabble over Finkel’s Clean Energy Target.
At the same time, he will also fervently hope there isn’t another big power crisis, and that the A$2.6 million of public money he spent making sure everyone knows about his energy plans provides effective insulation from any shocks.
Read more: Explainer: what can Tesla’s giant South Australian battery achieve?
The whole saga shows how policy windows can open up in unexpected ways. An attempt to blast a new technology fails, and a politician at state level sees no option but to act because of federal inadequacy. It’s happening in California too.
Judging by his interviews with me and the Guardian’s Katharine Murphy, Weatherill has found his signature issue – making lemonade from the huge lemon he was served last September. As another commentator wrote:
Far from being the last nail in the Weatherill government’s electoral coffin, the power crisis has perversely breathed new life into Labor’s re-election hopes… It is turning its own failures on energy security into a single-issue platform on which to campaign.
Weatherill is trying to build an innovation ecosystem for clean energy technology. Announcing a tender last month, Weatherill said his government is “looking for the next generation of renewable technologies and demand-management technologies to maintain our global leadership”.
And when do applications for that tender close? Well, it may be a coincidence, but the deadline is 5pm today – exactly a year since his state’s darkest hour.
We may survive the Anthropocene, but need to avoid a radioactive 'Plutocene'
On January 27, 2017, the Bulletin of the Atomic Scientists moved the hands of its Doomsday Clock to 2.5 minutes to midnight – the closest it has been since 1953. Meanwhile, atmospheric carbon dioxide levels now hover above 400 parts per million.
Why are these two facts related? Because they illustrate the two factors that could transport us beyond the Anthropocene – the geological epoch marked by humankind’s fingerprint on the planet – and into yet another new, even more hostile era of our own making.
My new book, titled The Plutocene: Blueprints for a post-Anthropocene Greenhouse Earth, describes the future world we are on course to inhabit, now that it has become clear that we are still busy building nuclear weapons rather than working together to defend our planet.
Read more: An official welcome to the Anthropocene epoch – but who gets to decide it’s here?
I have coined the term Plutocene to describe a post-Anthropocene period marked by a plutonium-rich sedimentary layer in the oceans. The Anthropocene is very short, having begun (depending on your definition) either with the Industrial Revolution in about 1750, or with the onset of nuclear weapons and sharply rising greenhouse emissions in the mid-20th century. The future length of the Plutocene would depend on two factors: the half-life of radioactive plutonium-239 of 24,100 years, and how long our CO₂ will stay in the atmosphere – potentially up to 20,000 years.
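To get a feel for that 24,100-year figure, here is a back-of-envelope sketch (my own illustration, not part of the book's argument) of standard radioactive decay applied to plutonium-239:

```python
# Fraction of plutonium-239 remaining after t years, given its 24,100-year
# half-life: N(t)/N0 = 0.5 ** (t / half_life). Illustration only.
HALF_LIFE_YEARS = 24_100  # plutonium-239 half-life cited above

def fraction_remaining(t_years: float) -> float:
    """Fraction of the original plutonium-239 still present after t years."""
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for t in (24_100, 50_000, 100_000):
    print(f"after {t:>7,} years: {fraction_remaining(t):.0%} remains")
# prints roughly 50%, 24% and 6%
```

In other words, even 100,000 years from now a few per cent of the plutonium deposited in that sedimentary layer would still be there, which is why the Plutocene's length is set by the half-life and by how long our CO₂ lingers in the atmosphere.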
During the Plutocene, temperatures would be much higher than today. Perhaps they would be similar to those during the Pliocene (2.6 million to 5.3 million years ago), when average temperatures were about 2℃ above those of pre-industrial times, or the Miocene (roughly 5.3 million to 23 million years ago), when average temperatures were another 2℃ warmer than that, and sea levels were 20–40m higher than today.
Under these conditions, population and farming centres in low coastal zones and river valleys would be inundated, and humans would be forced to seek higher latitudes and altitudes to survive – as well as potentially having to contend with the fallout of nuclear conflict. The most extreme scenario is that evolution takes a new turn – one that favours animals best equipped to withstand heat and radiation.
Climates past
While we have a range of tools for studying prehistoric climates, including ice cores and tree rings, these methods do not of course tell us what the future holds.
However, the basic laws of physics, the principles of climate science, and the lessons from past and current climate trends, help us work out the factors that will dictate our future climate.
Broadly speaking, the climate is shaped by three broad factors: trends in solar cycles; the concentration of atmospheric greenhouse gases; and intermittent events such as volcanic eruptions or asteroid impacts.
Solar cycles are readily predicted, and indeed can be seen in the geological record, whereas intermittent events are harder to account for. The factor over which we have the most control is our own greenhouse emissions.
CO₂ levels have previously climbed as high as 2,000 parts per million (ppm), most recently during the early Eocene, roughly 55-45 million years ago. The subsequent decline of CO₂ levels to just a few hundred parts per million then cooled the planet, creating the conditions that allowed Earth’s current inhabitants (much later including humans) to flourish.
But what of the future? Based on these observations, as reported by the Intergovernmental Panel on Climate Change (IPCC), several projections of future climates indicate an extension of the current interglacial period by about 30,000 years, consistent with the longevity of atmospheric CO₂.
If global warming were to reach 4℃, as suggested by Hans Joachim Schellnhuber, chief climate advisor to the German government, the resulting amplification effects on the climate would pose an existential threat both to nature and human civilisation.
Barring effective sequestration of carbon gases, and given amplifying feedback effects from the melting of ice sheets, warming of oceans, and drying out of land surfaces, Earth is bound to reach an average of 4℃ above pre-industrial levels within a time frame to which numerous species, including humans, may hardly be able to adapt. The increase in evaporation from the oceans and thereby water vapour contents of the atmosphere leads to mega-cyclones, mega-floods and super-tropical terrestrial environments. Arid and semi-arid regions would become overheated, severely affecting flora and fauna habitats.
The transition to such conditions is unlikely to be smooth and gradual, but may instead feature sharp transient cool intervals called “stadials”. Increasingly, signs of a possible stadial are being seen south of Greenland.
A close analogy can be drawn between future events and the Paleocene-Eocene Thermal Maximum about 55 million years ago, when the release of methane from Earth’s crust resulted in an extreme rise in temperature. But as shown below, the current rate of temperature rise is far more rapid – and more akin to the planet-heating effects of an asteroid strike.
Rate of global average temperature rise during (1) the end of the last Ice Age; (2) the Paleocene-Eocene Thermal Maximum; (3) the current bout of global warming; and (4) an asteroid impact. Author provided
Mounting our defence
Defending ourselves from global warming and nuclear disaster requires us to do two things: stop fighting destructive wars, and start fighting to save our planet. There is a range of tactics we can use to help achieve the second goal, including large-scale seagrass cultivation, extensive biochar development, and restoring huge swathes of the world’s forests.
Space exploration is wonderful, but we still only know of one planet that supports life (bacteria possibly excepted). This is our home, and there is currently little prospect of realising science fiction’s visions of an escape from a scorched Earth to some other world.
Read more: What is a pre-industrial climate and why does it matter?
Yet still we waver. Many media outlets operate in apparent denial of the connection between global warming and extreme weather. Meanwhile, despite diplomatic progress on nuclear weapons, the Sword of Damocles continues to hang over our heads, as 14,900 nuclear warheads sit aimed at one another, waiting for accidental or deliberate release.
If the clock does strike nuclear midnight, and if we don’t take urgent action to defend our planet, life as we know it will not be able to continue. Humans will survive in relatively cold high latitudes and altitudes. A new cycle would begin.
Andrew Glikson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Renewables will be cheaper than coal in the future. Here are the numbers
In a recent Conversation FactCheck I examined the question: “Is coal still cheaper than renewables as an energy source?” In that article, we assessed how things stand today. Now let’s look to the future.
In Australia, 87% of our electricity generation comes from fossil fuels. That’s one of the highest levels of fossil fuel generation in the world.
So we have important decisions to make about how we’ll generate energy as Australia’s fleet of coal-fired power stations reaches the end of its operating life, and as we move to decarbonise the economy to meet our climate goals under the Paris Agreement.
What will the cost of coal-fired and renewable energy be in the coming decades? Let’s look at the numbers.
Improvements in technology will make renewables cheaper
As technology and economies of scale improve over time, the initial capital cost of building an energy generator decreases. The rate of this cost decline is known as the “learning rate”. Improvements in technology are expected to reduce the price of renewables more than the price of coal in coming years.
The chart below, produced by consulting firm Jacobs Group and published in the recent Finkel review of the National Electricity Market, shows the projected levelised cost of electricity (LCOE) for a range of technologies in 2020, 2030 and 2050.
The chart shows a significant reduction in the cost of solar and wind, and a relatively static cost for mature technologies such as coal and gas. It also shows that large-scale solar photovoltaic (PV) generation, with a faster learning rate, is projected to be cheaper than wind generation from around 2020.
Notes: numbers in the chart refer to the average cost for each technology. For each generation technology, the range shows the lowest-cost to the highest-cost project available in Jacobs’ model, based on the input assumptions in the relevant year; the average is the average cost across that range of projects, and may not be the midpoint between the highest- and lowest-cost project. Large-scale solar photovoltaic includes fixed plate, single- and double-axis tracking; large-scale solar photovoltaic with storage includes 3 hours of storage at 100% capacity; solar thermal with storage includes 12 hours of storage at 100% capacity. Cost of capital assumptions are consistent with those used in the policy cases, that is, without the risk premium applied. The assumptions for the electricity modelling were finalised in February 2017 and do not take into account recent reductions in technology costs (such as recent wind farm announcements). Source: Independent Review into the Future Security of the National Electricity Market
Wind prices are already falling rapidly. For example: the graph above shows the 2020 price for wind at A$92 per megawatt-hour (MWh). But when the assumptions for the electricity modelling were finalised in February 2017, that price was already out of date.
In its 2016 Next Generation Renewables Auction, the Australian Capital Territory government secured a fixed price for wind of A$73 per MWh over 20 years (or A$56 per MWh in constant dollars at 3% inflation).
In May 2017, the Victorian renewable energy auction set a record low fixed price for wind of A$50-60 per MWh over 12 years (or A$43-51 per MWh in constant dollars at 3% inflation). This is below the AGL price for electricity from the Silverton wind farm of $65 per MWh fixed over five years.
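Those constant-dollar figures can be reproduced with a simple calculation. The sketch below is my own illustration: the article does not spell out its method, so averaging the deflated annual payments over the contract term is an assumption on my part.

```python
# Restate a fixed nominal contract price in constant (year-0) dollars at 3%
# inflation by averaging the deflated annual payments over the contract term.
def constant_dollar_price(nominal: float, years: int, inflation: float = 0.03) -> float:
    """Average of the fixed nominal price deflated back to year-0 dollars."""
    return sum(nominal / (1 + inflation) ** t for t in range(years)) / years

print(round(constant_dollar_price(73, 20)))  # ~56: ACT wind auction, A$/MWh over 20 years
print(round(constant_dollar_price(50, 12)))  # ~43: Victorian auction lower bound, 12 years
print(round(constant_dollar_price(60, 12)))  # ~51: Victorian auction upper bound, 12 years
```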
These long-term renewable contracts are similar to a LCOE, because they extend over a large fraction of the lifetime of the wind farm.
A selection of renewable energy long-term contract prices across Australia in recent years illustrates a gradual decline in wind energy auction results (in constant 2016 dollars), consistent with improvements in technology and economies of scale.
But this analysis is still based on LCOE comparisons – or what it would cost to use these technologies for a simple “plug and play” replacement of an old generator.
Now let’s price in the cost of changes needed to the entire electricity network to support the use of renewables, and to price in other factors, such as climate change.
Carbon pricing will increase the cost of coal-fired power
The economic, environmental and social costs of greenhouse gas emissions are not included in simple electricity cost calculations, such as the LCOE analysis above. Neither are the costs of other factors, such as the health effects of air particle pollution, or deaths arising from coal mining.
The risk of the possible introduction of carbon emissions mitigation policies can be indirectly factored into the LCOE of coal-fired power through higher rates for the weighted average cost of capital (in other words, higher interest rates for loans).
The Jacobs report to the Finkel Review estimates that the weighted average cost of capital for coal will be 15%, compared with 7% for renewables.
The cost of greenhouse gas emissions can be incorporated more directly into energy prices by putting a price on carbon. Many economists maintain that carbon pricing is the most cost-effective way to reduce global carbon emissions.
One megawatt-hour of coal-fired electricity creates approximately one tonne of carbon dioxide. So even a conservative carbon price of around A$20 per tonne would increase the levelised cost of coal generation by around A$20 per MWh, putting it at almost A$100 per MWh in 2020.
According to the Jacobs analysis, this would make both wind and large-scale photovoltaics – at A$92 and A$91 per MWh, respectively – cheaper than any fossil fuel source from the year 2020.
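As a quick arithmetic check of that comparison (my own sketch; the A$80 per MWh coal figure is read off the Jacobs projections cited above, not a separate estimate):

```python
# Adding a conservative carbon price to the projected 2020 cost of coal power.
coal_lcoe_2020 = 80        # A$/MWh, approximate new-build coal cost from the Jacobs projections
emissions_intensity = 1.0  # tonnes of CO2 per MWh of coal-fired electricity (as stated above)
carbon_price = 20          # A$ per tonne of CO2, the conservative price assumed above

coal_with_carbon = coal_lcoe_2020 + carbon_price * emissions_intensity
print(coal_with_carbon)    # ~100 A$/MWh, versus wind at A$92 and large-scale PV at A$91 in 2020
```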
It’s worth noting here the ultimate inevitability of a price signal on carbon, even if Australia continues to resist the idea of implementing a simple carbon price. Other policies currently under consideration, including some form of a clean energy target, would put similar upward price pressure on coal relative to renewables, while the global move towards carbon pricing will eventually see Australia follow suit or risk imposts on its carbon-exposed exports.
Australia’s grid needs an upgrade
Renewable energy (excluding hydro power) accounted for around 6% of Australia’s energy supply in the 2015-16 financial year. Once renewable energy exceeds, say, 50% of Australia’s total energy supply, the LCOE for renewables should be used with caution.
This is because most renewable energy – like that generated by wind and solar – is intermittent, and needs to be “balanced” (or backed up) in order to be reliable. This requires investment in energy storage. We also need more transmission lines within the electricity grid to ensure ready access to renewable energy and storage in different regions, which increases transmission costs.
And, there are additional engineering requirements, like building “inertia” into the electricity system to maintain voltage and frequency stability. Each additional requirement increases the cost of electricity beyond the levelised cost. But by how much?
Australian National University researchers calculated that the addition of pumped-hydro storage and extra network construction would add a levelised cost of balancing of A$25-30 per MWh to the levelised cost of renewable electricity.
The researchers predicted that eventually a future 100% renewable energy system would have a levelised cost of generation in current dollars of around A$50 per MWh, to which adding the levelised cost of balancing would yield a network-adjusted LCOE of around A$75-80 per MWh.
The Australian National University result is similar to the Jacobs 2050 LCOE prediction for large-scale solar photovoltaic plus pumped hydro of around A$69 per MWh, which doesn’t include extra network costs.
The AEMO 100% Renewables Study indicated that this would add another A$6-10 per MWh, yielding a comparable total in the range A$75-79 per MWh.
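The arithmetic behind those network-adjusted totals can be tallied in a few lines. This is a sketch of the figures quoted above, not additional modelling:

```python
# Network-adjusted LCOE for a hypothetical 100% renewable system, A$/MWh.
anu_generation = 50                             # ANU projected generation cost
anu_balancing_low, anu_balancing_high = 25, 30  # ANU levelised cost of balancing (storage + transmission)
print(anu_generation + anu_balancing_low,
      anu_generation + anu_balancing_high)      # 75 80

jacobs_pv_plus_pumped_hydro = 69                # Jacobs 2050 LCOE, large-scale PV plus pumped hydro
network_low, network_high = 6, 10               # AEMO 100% Renewables Study extra network cost
print(jacobs_pv_plus_pumped_hydro + network_low,
      jacobs_pv_plus_pumped_hydro + network_high)  # 75 79
```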
This would make a 100% renewables system competitive with new-build supercritical coal which, according to the Jacobs calculations in the chart above, would come in at around A$75 per MWh between 2020 and 2050 (around A$80 per MWh for ultra-supercritical coal).
This projection for supercritical coal is consistent with other studies, including the CO2CRC’s 2015 estimate (A$80 per MWh) and the figures used by CSIRO in 2017 (A$65-80 per MWh).
So, what’s the bottom line?
By the time renewables dominate electricity supply in Australia, it’s highly likely that a price on carbon will have been introduced. A conservative carbon price of at least A$20 per tonne would put coal in the A$100-plus bracket for a megawatt-hour of electricity. A completely renewable electricity system, at A$75-80 per MWh, would then be more affordable than coal economically, and more desirable environmentally.
Ken Baldwin receives funding from the Australian Research Council.
To avoid crisis, the gas market needs a steady steer, not an emergency swerve
Rising gas costs are “the single biggest factor in the current rise in electricity prices”.
What is most noteworthy about this statement is not the fact that it is true, but that it was made by Prime Minister Malcolm Turnbull, many of whose party colleagues remain convinced that renewable energy is the real bogeyman.
Read more: Big gas shortage looming, but government stays hand on export controls
Turnbull’s comments were made in response to a report released this week by the Australian Energy Market Operator (AEMO), which yet again warns of impending gas shortages.
I argue below that renewables are a solution to the problem, rather than its cause. But first, is there actually a gas crisis?
A gas crisis?
Although AEMO has predicted a potential gas shortfall for the east coast, there is no shortage of gas. Unprecedented amounts are being produced and exported as liquefied natural gas (LNG) from terminals in Queensland, while at the same time the domestic market is being starved, driving prices sky-high.
Read more: Memo to COAG: Australia is already awash with gas
Without government action there could indeed be a domestic shortfall next year, but the government has already set in place a system of export restrictions to ensure domestic supply. These restrictions have not yet been invoked, but the crisis for the government is that they may have to be, and the decision must be made before November 30.
Emergency export restrictions are an intervention of last resort for a governing party built on free-market principles. They are necessary because the government has failed to champion a longer-term and less interventionist strategy, such as the reservation of a certain percentage of gas produced from new gas fields for domestic use. Western Australia has had a policy of 15% reservation for many years and other states are following suit.
Read more: Our power grid is crying out for capacity, but should we open the gas valves?
Not only is there plenty of gas being produced, but it would be relatively painless to divert some of it to the domestic market. AEMO notes several times in its report that producers have some flexibility in where they send their gas. In particular, a significant proportion of the exported gas is not under long-term contract but is destined for the overseas spot market, where surplus energy is traded for immediate delivery. This gas could easily be diverted to the east coast market.
On current projections, 63.4 petajoules of gas is destined for the spot market in 2018. To put this in context, the projected shortfall is 54PJ in 2018 and 48PJ in 2019. In other words, the uncontracted gas destined for the spot market is more than enough to make up the expected shortfall.
Turnbull is also arguing that the potential shortage is due to state bans on gas exploration and production. However, the production costs associated with as-yet-untapped reserves and resources in those states are much higher than for Queensland. Thus, even in the absence of bans it would still make sense to target untapped Queensland resources first.
Moving the gas south
The extra gas released in Queensland for domestic use would need to be transported to the southern states by pipelines that are already close to capacity. This is a potential problem. However, it could be resolved by means of “gas swaps”.
Gas produced in the southern states that has been contracted for sale through the Queensland terminals could be swapped for gas released by Queensland producers for distribution to the southern states. This would avoid bottlenecks and gas transportation costs.
In the longer term, the problem could be solved by AGL’s proposal to establish an LNG import terminal (a regasification plant) at Western Port in Victoria.
This facility could process LNG either from Queensland or from further afield. The terminal would have the potential to provide all of Victoria’s household and business customer gas needs. If all goes to plan, AGL will begin construction in 2019 and bring the terminal into operation by 2020–21.
Our free-market government is now firmly in interventionist mode, with gas export restrictions and plans to fund a Snowy pumped hydro scheme. There is even a proposal to subsidise the continued operation of AGL’s Liddell coal-fired power station beyond its scheduled closure in 2022.
Read more: Baffled by baseload? Dumbfounded by dispatchables? Here’s a glossary of the energy debate
But rather than continuing to badger AGL about keeping Liddell open, the government would be wiser to press the firm to bring its regasification plant online as soon as possible. Not only does it make economic sense, but it is greatly preferable from an environmental point of view.
The renewables solution
Another way to deal with the predicted gas shortfall is to reduce demand. According to AEMO figures, gas-powered electricity generation in 2018 is expected to require 176PJ of gas, dropping to 135PJ in 2019. The lower demand in 2019 is due to increased renewable energy generation, as well as increased consumer energy efficiency.
Recalling that the projected shortfall in gas for 2018 is 54PJ, it is apparent that this shortfall would be wiped out by a roughly 30% reduction in the gas used for power generation. Based on 2016 figures, that would require an increase of roughly 30% in power generation from renewables.
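As a quick sanity check, the following short Python sketch uses only the figures quoted in this article to show both routes to covering the shortfall: diverting uncontracted spot-market gas, or cutting the gas burned for power generation.

```python
# All quantities in petajoules (PJ), taken from the AEMO figures quoted in this article.
spot_market_gas_2018 = 63.4               # uncontracted gas destined for the overseas spot market
shortfall = {2018: 54, 2019: 48}          # projected domestic shortfall
gas_for_power = {2018: 176, 2019: 135}    # gas expected to be burned for electricity generation

# Route 1: the uncontracted spot-market gas alone exceeds the projected 2018 shortfall.
print(spot_market_gas_2018 > shortfall[2018])          # True

# Route 2: the cut in gas-fired generation needed to cover the same shortfall.
reduction_needed = shortfall[2018] / gas_for_power[2018]
print(f"{reduction_needed:.0%} reduction in gas used for power generation")   # ~31%
```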
Given the relatively short time it now takes to build new renewable generators, this is a very promising path. Coupled with battery storage or pumped hydro, these new generators would provide dispatchable power exactly as gas does. All that is required is for the government to implement the right policy settings.
Finally, state government policies may already be taking us in this direction. The Queensland government recently announced a major program of incentives for solar power. This will significantly increase renewable power generation and dampen the demand for gas-fired power. AEMO notes this development but states explicitly that this has not been taken into account in its projections.
For whatever reason, AEMO’s final conclusion is not as gloomy as its analysis might suggest. It states that the gas situation in eastern and south eastern Australia “is expected to remain tight”. Rather than calling for action, it considers that the situation “warrants continued close attention and monitoring”. Amid all the talk of impending crisis, what we need is steady pressure on the steering wheel, rather than a sharp swerve.
Andrew Hopkins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
How TV weather presenters can improve public understanding of climate change
A recent Monash University study of TV weather presenters has found a strong interest from free-to-air presenters in including climate change information in their bulletins.
The strongest trends in the survey, which had a 46% response rate, included:
97% of respondents thought climate change is happening;
97% of respondents believed viewers had either “strong trust” or “moderate trust” in them as a reliable source of weather information;
91% of respondents were comfortable with presenting local historical climate statistics, and just under 70% were comfortable with future local climate projections; and
97% of respondents thought their audiences would be interested in learning about the impacts of climate change.
According to several analyses of where Australians get their news, in the age of ubiquitous social media TV is still the single largest news source.
And when one considers that social media and now apps are increasingly used as the interface for sharing professional content from news organisations – which includes TV news – the reach of TV content is not about to be challenged anytime soon.
The combined audience for primetime free-to-air TV in the five capital city markets alone is a weekly average of nearly 3 million viewers. This does not include those using catch-up on portable devices, and those watching the same news within the pay TV audience. And there are those who are getting many of the same news highlights and clips through their Facebook feeds and app-based push media.
Yet the ever-more oligopolistic TV industry in Australia is very small. And professional weather presenters are a rather exclusive group: there are only 75 such presenters in Australia.
It is because of this, rather than in spite of it, that weather presenters are able to command quite a large following. And they are highly promoted by the networks themselves – on freeway billboards and in station advertising. This promotion makes weather presenters among the most trusted media personalities, even as the information they present is regarded as apolitical.
At the same time, Australians have a keen interest in talking about weather. It tends to unite us.
These three factors – trust, the impartial nature of weather, and Australians’ enthusiasm for talking about it – put TV presenters in an ideal position to present climate information. Such has been the experience in the US, where the Center for Climate Change Communication, together with the Climate Matters program, has partnered with more than 350 TV weathercasters to present simple, easy-to-process factual climate information.
In the US it is about mainstreaming climate information as factual content delivered by trusted sources. The Climate Matters program found that TV audiences value climate information more when it is locally based.
Monash’s Climate Change Communication Research Hub is conducting research as a precondition to establishing such a program in Australia. The next step is to survey free-to-air TV audiences in the capital city markets to evaluate Australians’ appetite for a short climate segment alongside the weather on at least a weekly basis.
As in the US, TV audiences are noticing more and more extreme weather and want to understand what is causing it, and what to expect in the future.
The Climate Change Communication Research Hub is also involved in creating “climate communications packages” that can be tested with audiences. These are largely based on calendar and anniversary dates, and show long-term trends using these dates as datapoints.
The calendar dates could be sporting fixtures, the start of the fire or cyclone season, or other specific dates around which long-term climate trends can be understood across a collection of years. There has been so much extreme weather in recent years that there are plenty of anniversaries.
Let’s take November 21, 2016 – the most severe thunderstorm asthma event ever to impact Melbourne. It saw 8,500 presentations to hospital emergency departments and nine tragic deaths.
There is no reason why this event can’t be covered this year in the context of climate, as a community service message. As explained in the US program, even a small increase in average spring temperatures leads to higher counts of more potent pollen. And as more energy is fed into the destructive power of storm systems, the prospect of pollen being broken up and distributed efficiently throughout population centres is heightened.
The need to be better prepared for thunderstorms in spring is thus greater, even for those who have never had asthma before.
For its data, the Climate Change Communication Research Hub will be relying on the information from the Bureau of Meteorology and the CSIRO, but will call on the assistance of a wide range of organisations such as the SES, state fire services, and health authorities in conducting its research.
In February 2018, the hub will hold a workshop with TV weather presenters as part of the Australian Meteorological and Oceanographic Society conference. Planning for the project will be introduced at the conference, with a pilot to be conducted in one media market and then rolled out to multiple markets in the second year.
The program is not intended to raise the level of concern about climate change, but public understanding of it. As survey after survey shows, Australians are already concerned about climate change. But more information is needed about local and regional impacts that will help people make informed choices about mitigation, adaptation and how to plan their lives – beyond tomorrow’s weather.
Disclosure
David Holmes received funding from Monash University to conduct research for the project described in this article.
Baffled by baseload? Dumbfounded by dispatchables? Here's a glossary of the energy debate
Australia’s energy market is a prominent fixture in our daily news cycle. Amid the endless ideology and politics swirling around the sector, technical terms such as “baseload power” and “dispatchable generation” are thrown around so often that there is a danger the meaning of these terms can get lost in the public debate.
The term “energy crisis” is bandied around quite loosely, with some confusion over whether the crisis is about prices or security of supply. The politics of this are infernal, and would have been largely avoidable had all sides of politics paid consistent and principled attention to energy policy over the 20 years since the formation of the National Energy Market.
It’s worth setting the record straight on the meaning of some of these terms and how they relate to climate policies, new technologies and the progression of market reform and regulation in Australia.
This glossary, which is by no means exhaustive, is a first step.
Baseload power
Baseload power refers to generation resources that generally run continuously throughout the year and operate at stable output levels. The continuous operation of baseload resources makes economic sense because they have low running costs relative to other sources of power. The value of baseload plants is mostly economic, and not related to their ability to follow the constantly varying system demand.
Baseload plants include coal-fired and gas-fired combined-cycle power plants. However, Australia’s international commitment to reduce carbon emissions is curtailing the economic viability of traditional baseload sources.
Coal-fired power stations such as Loy Yang are being gradually retired.
Wholesale market (the “National Energy Market”)
The term National Energy Market is confusing because it refers to a competitive market for wholesale electricity covering mostly the east coast of Australia: it doesn’t include Western Australia or the Northern Territory, nor does it cover the gas system. The National Energy Market allows all kinds of utility-scale power resources to connect to the transmission system to meet large-scale power requirements.
However, industry talk about the “energy market”, or even the “NEM”, can also refer to the entire supply chain, which includes the networks for high-voltage transmission and medium- and low-voltage distribution, as well as retailing to the end consumer. The prices consumers see include all these aspects of the supply chain. This can add significantly to the confusion.
The wholesale market is referred to as a “market” because there is competition between generators. Each generator places daily price “bids” to sell power and adjusts quantities in up to 10 price bands every five minutes. In this way, the sale of power is matched to the available energy and performance of the generating unit.
The market works to efficiently dispatch all variable and “dispatchable” resources to minimise the cost of electricity. The Australian Energy Market Operator (AEMO) co-ordinates the National Energy Market.
Wholesale price
The wholesale “spot” price at which power is traded in the NEM is based on the highest accepted generator offers to balance supply and demand in each region. This is intended to encourage efficient behaviour by generators, as well as to co-ordinate the efficient dispatch of resources.
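To illustrate how a marginal price emerges from this bidding process, the sketch below stacks hypothetical generator offers from cheapest to most expensive until demand is met. Real NEM dispatch handles regional constraints, losses, ramp rates and ten price bands per generator, so this is a toy model only; the offer figures are invented.

```python
# Toy merit-order dispatch: the spot price is set by the most expensive offer
# needed to meet demand. Offers and demand below are invented for illustration.

def clearing_price(offers, demand_mw):
    """offers: list of (price in $/MWh, quantity in MW). Returns the marginal price."""
    dispatched = 0.0
    for price, quantity in sorted(offers):          # cheapest offers are dispatched first
        dispatched += quantity
        if dispatched >= demand_mw:
            return price
    raise ValueError("not enough offered capacity to meet demand")

offers = [(20, 500), (45, 300), (80, 200), (300, 100)]
print(clearing_price(offers, demand_mw=900))         # -> 80 ($/MWh)
```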
Storage
Storage refers to energy captured for later use, typically in a battery. Electricity has been expensive to store in the past, but the cost of storage is expected to continue to fall with the improvement of battery technologies. For example, lithium-ion batteries were developed for mobile communications and laptops but now are being upscaled for electric vehicles and utility-scale energy storage.
Due to traditionally low storage levels in the system, electricity has to be mostly generated within seconds of when it is needed, otherwise the stability of the system can be put at risk. Storage technology will become more valuable as the market penetration of wind and solar power increases. With declining costs of various battery technologies, this will become easier to deliver.
Demand (and peak demand)
Demand refers to the amount of electricity required to meet consumption levels at any given moment. Power refers to the rate of energy consumption in megawatts (millions of watts, or MW), whereas energy in megawatt-hours (MWh) refers to the total consumption over a period, such as a day, month or year.
Peak demand is the highest rate of energy consumption required in a particular season, such as heating in winter or cooling in summer. It is a vital measure because it determines how much generation equipment is needed to cover for unexpected outages and maintain reliable supply.
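The distinction between power (a rate, measured in MW) and energy (a quantity, measured in MWh) is easy to show with a toy half-hourly demand profile; the numbers below are invented for illustration.

```python
# Power is a rate (MW); energy is power integrated over time (MWh).
demand_mw = [5200, 5400, 6100, 7300, 8200, 7900, 6800, 6000]   # invented half-hourly readings
hours_per_reading = 0.5

peak_demand_mw = max(demand_mw)                                  # sizes the generation fleet
energy_mwh = sum(d * hours_per_reading for d in demand_mw)       # consumption over the window

print(peak_demand_mw)    # 8200 (MW)
print(energy_mwh)        # 26450.0 (MWh over the four-hour window)
```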
Dispatchable generation
Dispatchable generation refers to generation, based on fossil fuels or hydro power, that can be controlled to balance electricity supply and demand. More flexible power plants, such as open-cycle gas turbines and hydro power stations, can operate at partial loading and respond to short-term changes in supply and demand.
Flexibility is the key here. Storage can provide flexibility as well, either from batteries or pumped-hydro storage. The need for such resources is becoming more urgent due to retirement of the older baseload plants and the growing amount of less emissions-intensive energy sources.
Frequency control
Synchronous generators in power stations spin at around 50 cycles per second. This speed is referred to as the “frequency”, measured in hertz (Hz). Keeping this frequency constant is essential for maintaining voltage and thus reliability.
If there is loss of generation somewhere, extra power is drawn through the electricity network from other plants. This causes these generators’ rotors to slow down and the system frequency to fall. A key parameter is the so-called “maximum rate of change of frequency”. The faster the frequency changes, the less time is available to take corrective action.
Inertia
Inertia refers to the ability of a system to maintain a steady frequency after a significant imbalance between generation and load. The higher the inertia, the slower the rate of change of frequency after a disturbance.
A critical concern is that there must almost always be enough inertia to keep the power system stable. Given that many coal-fired power plants are being retired, the amount of inertia is falling markedly.
Eventually power systems will need to provide inertia explicitly by adding synchronous rotors (operating independently of power generation) or by providing other power system controls that are able to respond very quickly to deviations in power system frequency. These can be based on a combination of storage and advanced power electronics already available today.
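The link between inertia and the rate of change of frequency can be made concrete with a standard textbook approximation of the swing equation: the initial rate of change is roughly the size of the power imbalance times the nominal frequency, divided by twice the stored rotational energy of the machines online. The system sizes and contingency below are assumed figures for illustration, not AEMO data.

```python
# Textbook approximation: df/dt ~= delta_p * f0 / (2 * H * S)
# where H is the average inertia constant (seconds) and S the online synchronous
# capacity (MVA), so H * S is the stored rotational energy. Figures are assumptions.

f0 = 50.0          # Hz, nominal system frequency
delta_p = 500.0    # MW, size of the sudden generation loss (assumed contingency)

def rocof(inertia_constant_s, online_capacity_mva):
    stored_energy = inertia_constant_s * online_capacity_mva     # MW-seconds
    return delta_p * f0 / (2 * stored_energy)                    # Hz per second

print(rocof(inertia_constant_s=4.0, online_capacity_mva=20000))  # ~0.16 Hz/s
print(rocof(inertia_constant_s=2.0, online_capacity_mva=20000))  # ~0.31 Hz/s: less inertia, faster fall
```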
Regional markets within the National Energy Market
The National Energy Market operates as five interconnected regional markets in the eastern states: Queensland, New South Wales, Victoria, South Australia and Tasmania. This reflects the way the power systems were originally set up under state authorities.
The National Energy Market cannot operate as a single market with a single price due to two important factors. It is not cost-effective to completely remove power transmission constraints between the state regions, and electrical losses in power transmission mean that each location requires a different price to efficiently reflect the impact of these losses.
When there are large power flows between regions, prices can vary by up to 30% from one region to another due to losses. High prices occur when there is a power shortage relative to demand. Negative prices occur when load is less than the minimum stable generation committed. During periods of high prices (usually due to high demand or, less frequently, to lower capacity), greater price differences can occur when the interconnectors reach their limits, causing very high-priced generation in the importing region to be dispatched.
The National Energy Market operates across Australia’s east coast.
Interconnectors
In view of the long distances in the National Energy Market (4,000km from end to end, the longest synchronous power system in the world), there are significant constraints on transmission capacity between the state-based regions. The transmission links between the regions are given special treatment and are called “interconnectors”.
The marginal power losses across these interconnectors are calculated every five minutes to support efficient dispatch of resources and to ensure that the spot prices in each region are efficient and consistent with prevailing supply and demand. These interconnectors have limited capacity (due to overheating and other factors), however, and AEMO carefully manages their use to ensure balancing and inertia can be provided across regions.
Ancillary services and spinning reserve
Ancillary services refer to a variety of services the market requires to keep frequency and voltage under control. They maintain the quality of supply and support the stability of the power system against disturbances. This frequency control is required during normal operation to maintain the continuous balance of energy supply and demand. For this purpose some generation capacity is held in reserve in order to vary its output up and down to adjust the total system generation level.
This difference between the maximum power output and the lower operating level is called “spinning reserve”. Spinning reserve is also required for output reduction to cover sudden disconnection of load or sudden increase in solar or wind power.
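A minimal sketch of how spinning reserve can be tallied from the headroom (and footroom) of the units that are online; the unit data is invented for illustration.

```python
# Raise reserve: extra output available from online units.
# Lower reserve: room to back output off. Unit data is invented.
units = [
    {"name": "unit_a", "max_mw": 500, "min_mw": 200, "output_mw": 420},
    {"name": "unit_b", "max_mw": 350, "min_mw": 120, "output_mw": 300},
]

raise_reserve = sum(u["max_mw"] - u["output_mw"] for u in units)
lower_reserve = sum(u["output_mw"] - u["min_mw"] for u in units)

print(raise_reserve, lower_reserve)   # 130 400 (MW)
```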
Transmission upgrades
The upgrading of the transmission system, including the interconnectors, is a complex regulatory process. Transmission has a significant value across the whole electricity supply chain from producers to consumers.
This value is easy to measure given electricity market conditions at any given moment. But it’s difficult to predict when these interconnectors need to be built or replaced because some transmission assets can operate for up to 80 years. Significant co-ordination is required in planning new investments as the location and deployment timing of new renewable generation capacity is uncertain and variable.
30-minute price settlement windows (and five-minute ones)
Generators are paid the spot price for all their output, and consumers (via retailers) are charged at the spot price for their consumption by AEMO. This “trading” price is calculated every 30 minutes for the purpose of transacting the cash flows (as an average of the five-minute dispatch prices). This process is called “settlement”.
There is a plan in place to move to five-minute settlement over the next three years. This would help reward more flexible resources (including batteries) as they respond more efficiently to the impact of sudden changes in output.
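A small worked example shows why the settlement window matters. Under 30-minute settlement, a single five-minute price spike is diluted across the half-hour average; under five-minute settlement, a fast-responding battery would be paid the spike price for the interval in which it actually responded. The prices are invented for illustration.

```python
# Six five-minute dispatch prices making up one 30-minute trading interval ($/MWh).
five_minute_prices = [80, 85, 90, 9000, 95, 100]    # invented, with one price spike

settlement_price_30min = sum(five_minute_prices) / len(five_minute_prices)
print(settlement_price_30min)    # 1575.0 -- what the 30-minute trading interval settles at today

# Under five-minute settlement, a battery discharging only during the spike would be
# settled at 9000 $/MWh for that interval, rather than the diluted 30-minute average.
```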
Ariel Liebman receives funding from the Australian Federal Departments of Education and Foreign Affairs and Trade through the Australia Indonesia Centre.
Ross Gawler (ross.gawler@monash.edu) is affiliated with Monash University, Jacobs Consulting and McDonald Gawler Pty Ltd. He occasionally consults to participants in the National Electricity Market in affiliation with Monash University or Jacobs Consulting, or through McDonald Gawler Pty Ltd, a small private company. He contributes a small monthly donation to GetUp!