The Conversation


Australia's electricity market is not agile and innovative enough to keep up

Fri, 2017-02-17 05:12

On the early evening of Wednesday, February 8, electricity supply to some 90,000 households and businesses in South Australia was cut off for up to an hour. Two days later, all electricity consumers in New South Wales were warned the same could happen to them. It didn’t, but apparently only because supply was cut to the Tomago aluminium smelter instead. In Queensland, it was suggested consumers might also be at risk over the two following days, even though it was a weekend, and again on Monday, February 13. What is going on?

The first point to note is that these were all very hot days. This meant that electricity demand for air conditioning and refrigeration was very high. On February 8, Adelaide recorded its highest February maximum temperature since 2014. On February 10, western Sydney recorded its highest ever February maximum, and then broke this record the very next day. Brisbane posted its highest ever February maximum on February 13.

That said, the peak electricity demand in both SA and NSW was some way below the historical maximum, which in both states occurred during a heatwave on January 31 and February 1, 2011. In Queensland it was below the record reached last month, on January 18.

Regardless of all this, shouldn’t the electricity industry be able to anticipate such extreme days, and have a plan to ensure that consumers’ needs are met at all times?

Much has already been said and written about the reasons for the industry’s failure, or near failure, to do so on these days. But almost all of this has focused on minute-by-minute details of the events themselves, without considering the bigger picture.

The wider issue is that the electricity market’s rules, written two decades ago, are not flexible enough to build a reliable grid for the 21st century.

Vast machine

In an electricity supply system, such as Australia’s National Electricity Market (NEM), the amount of electricity supplied must precisely match the amount being consumed in every second of every year, and always at the right voltage and frequency. This is a big challenge – literally, considering that the NEM covers an area stretching from Cairns in the north, to Port Lincoln in the west and beyond Hobart in the south.

Continent-sized electricity grids like this are sometimes described as the world’s largest and most complex machines. They require not only constant maintenance but also regular and careful planning to ensure they can meet new demands and incorporate new technologies, while keeping overall costs as low as possible. All of this has to happen without ever interrupting the secure and reliable supply of electricity throughout the grid.

Until the 1990s, this was the responsibility of publicly owned state electricity commissions, answerable to their state governments. But since the industry was comprehensively restructured from the mid-1990s onwards, individual states now have almost no direct responsibility for any aspect of electricity supply.

Electricity is now generated mainly by private-sector companies, while the grid itself is managed by federally appointed regulators. State governments’ role is confined to one of shared oversight and high-level policy development, through the COAG Energy Council.

This market-driven, quasi-federal regime is underpinned by the National Electricity Rules, a highly detailed and prescriptive document that runs to well over 1,000 pages. This is necessary to ensure that the grid runs safely and reliably at all times, and to minimise opportunities for profiteering.

The downside is that these rules are inflexible, hard to amend, and unable to anticipate changes in technology or economic circumstances.

Besides governing the grid’s day-to-day operations, the rules specify processes aimed at ensuring that “the market” makes the most sensible investments in new generation and transmission capacity. These investments need to be optimal in terms of technical characteristics, timing and cost.

To borrow a phrase from the prime minister, the rules are not agile and innovative enough to keep up. When they were drawn up in the mid-1990s, electricity came almost exclusively from coal and gas. Today we have a changing mix of new supply technologies, and a much more uncertain investment environment.

Neither can the rules ensure that the closure of old, unreliable and increasingly expensive coal-fired power stations will occur in a way that is most efficient for the grid as a whole, rather than most expedient for individual owners. (About 3.6 gigawatts of capacity, spread across all four mainland NEM states and equalling more than 14% of current coal power capacity, has been closed since 2011; this will increase to 5.4GW and 22% when Hazelwood closes next month.)

Finally, one of the biggest drivers of change in the NEM over the past decade has been the construction of new wind and solar generation, driven by the Renewable Energy Target (RET) scheme. Yet this scheme stands completely outside the NEM rules.

The Australian Energy Market Commission – effectively the custodian of the rules – has been adamant that climate policy, the reason for the RET, must be treated as an external perturbation, to which the NEM must adjust while making as few changes as possible to its basic architecture. On several occasions in recent years the commission has successfully blocked proposals to broaden the rules by amending the National Electricity Objective to include an environmental goal of boosting renewable energy and reducing greenhouse emissions.

Events in every state market over the past year have shown that the electricity market’s problems run much deeper than the environmental question. Indeed, they go right to the core of the NEM’s reason for existence, which is to keep the lights on. A fundamental review is surely long overdue.

The most urgent task will be identifying what needs to be done in the short term to ensure that next summer, with Hazelwood closed, peak demands can be met without more load shedding. Possible actions may include establishing firm contracts with major users, such as aluminium smelters, to make large but brief reductions in consumption, in exchange for appropriate compensation. Another option may be paying some gas generators to be available at short notice, if required; this would not be cheap, as it would presumably require contingency gas supply contracts to be in place.

The most important tasks will address the longer term. Ultimately we need a grid that can supply enough electricity throughout the year, including the highest peaks, while ensuring security and stability at all times, and that emissions fall fast enough to help meet Australia’s climate targets.


Hugh Saddler is a member of the Board of the Climate Institute.

Categories: Around The Web

Global clean energy scorecard puts Australia 15th in the world

Thu, 2017-02-16 12:01
The World Bank has highlighted steps to improve sustainable energy investment.

Australia ranks equal 15th overall in a new World Bank scorecard on sustainable energy. We are tied with five other countries in the tail-end group of wealthy OECD countries – behind Canada and the United States and just one place ahead of China.

Called the Regulatory Indicators for Sustainable Energy (RISE), the initiative provides benchmarks to evaluate clean energy progress, and insights and policy guidance for Australia and other countries.

RISE rates country performance in three areas: renewable energy, energy efficiency, and access to modern energy (the last excluding advanced countries), using 27 indicators and 80 sub-indicators. These include things like legal frameworks, building codes, and government incentives and policies. The results of the individual indicators are combined into an overall score.
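As a purely hypothetical illustration of how such a roll-up might work (the text does not describe RISE's actual weighting scheme, so a simple unweighted mean is assumed here), the aggregation could be sketched like this:

```python
# Hypothetical sketch of rolling indicator scores up into an overall score.
# RISE's real methodology and weights are not given in the text; an
# unweighted mean is assumed purely for illustration.

def pillar_score(indicator_scores):
    """Average one pillar's indicator scores (each on a 0-100 scale)."""
    return sum(indicator_scores) / len(indicator_scores)

def overall_score(pillars):
    """Average the pillar scores into a single country score."""
    return sum(pillar_score(p) for p in pillars) / len(pillars)

# Made-up example: three pillars with a few indicator scores each.
renewables = [60, 70, 50]
efficiency = [80, 75, 85]
access = [100, 100]

print(overall_score([renewables, efficiency, access]))  # 80.0
```

A real scorecard would likely weight indicators unevenly and handle missing pillars (such as energy access for advanced countries) explicitly; this sketch only shows the averaging structure.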

The majority of wealthy countries score well in the scorecard. But when you drill down into the individual areas, the story becomes more complex. The report notes that “about half the countries with more appropriate policy environments for sustainable energy are emerging economies,” for example.

The RISE ranking. RISE report

The report relies on data up to 2015. So it does not account for recent developments such as the Paris climate conference, the Australian National Energy Productivity Plan, the widespread failure to enforce building energy regulations, and the end of Australia’s major industrial Energy Efficiency Opportunities program under the Abbott government.

Furthermore, Australian electricity demand growth has recently re-emerged after five years of decline.

But the World Bank plans to publish updated indicators every two years, so over time the indicators should become a valuable means of tracking and influencing the evolution of global clean energy policy.

Australia

Australia’s ranking masks some good, bad and ugly subtleties. For example, Australia joins Chile and Argentina as the only OECD high-income countries without some form of carbon pricing mechanism. Even the United States, whose EPA uses a “social cost of carbon” in regulatory action, and has pricing schemes in some states, meets the RISE criteria.

Australia also ranks lower than the United States for renewable energy policy, at 24th. This is due to scoring poorly in incentives and regulatory support, carbon pricing, and mechanisms supporting network connection and appropriate pricing. But we are saved somewhat by having a legal framework for renewables, and strong management of counter-party risk. It’s not clear how recent political uncertainty, and the resulting temporary collapse of investment in large renewable energy projects, may affect the score.

I have argued in the past that Australia is missing out on billions of dollars in savings through its lack of ambition on energy efficiency. Yet we rate equal 13th on this criterion, compared with 24th on renewable energy. It seems that many other countries are forgoing even more money than us.

In energy efficiency, we score highly for incentives from electricity rate structures, building energy codes and financing mechanisms for energy efficiency. Our public sector policies and appliance minimum energy standards also score well. Our weakest areas are lack of carbon pricing and monitoring, and information for electricity consumers. National energy efficiency planning, incentives for large consumers and energy labelling all do a bit better. Of course, these ratings are relative to a low global energy efficiency benchmark.

The rest of the world

Much of the report focuses on developing countries. There is a wide spread of activity here, with some countries almost without policies, and others like Vietnam and Kazakhstan doing well, ranking equal 23rd. China ranks just behind Australia’s cluster at 21st.

RISE shows that policies driving access to modern energy seem to be achieving results. The report suggests that 1.1 billion people do not have access to electricity, down from an estimated 1.4 billion a few years ago. A significant contributor to this seems to be the declining cost of solar panels and other renewable energy sources, and greater emphasis on micro-grids in rural areas.

The report highlights the importance of strategies that integrate renewables and efficiency. But it doesn’t mention an obvious example: the viability of rural renewable energy solutions is being greatly assisted by the declining cost and large efficiency improvements of technologies such as LED lighting, mobile phones and tablet computers. The overall outcome is much improved access to services, and social and economic development, achieved with much smaller and cheaper renewable energy and storage systems.

The takeaway


RISE finds that clean energy policy is progressing across most countries. However, energy efficiency policy is well behind renewable energy. “This is another missed opportunity”, say the report’s authors, “given that energy efficiency measures are among the most cost-effective means of reducing a country’s carbon footprint.” They also note that energy efficiency policy tends to be fairly superficial.

Australia’s ranking on renewable energy policy is mediocre, while our better energy efficiency ranking is relative to global under-performance. The Finkel Review and Climate Policy Review offer opportunities to integrate renewables and energy efficiency into energy market frameworks. The under-resourced National Energy Productivity Plan could be cranked up to deliver billions of dollars more in energy savings, while reducing pressure on electricity supply infrastructure and making it easier to achieve ambitious energy targets. And RISE seems to suggest we need a price on carbon.

The question is, in a world where action on clean energy is accelerating in response to climate change and as a driver of economic and social development, will Australia move up or slip down the rankings in the next report?


Alan Pears has worked for government, business, industry associations and public interest groups, and at universities, on energy efficiency, climate response and sustainability issues since the late 1970s. He is now an honorary Senior Industry Fellow at RMIT University and a consultant, as well as an adviser to a range of industry associations and public interest groups. His investments in managed funds include firms that benefit from growth in clean energy.


Climate change doubled the likelihood of the New South Wales heatwave

Thu, 2017-02-16 05:10
Emergency crews tackle a bushfire at Boggabri, one of dozens across NSW during the heatwave. AAP Image/NEWZULU/Karen Hodge

The heatwave that engulfed southeastern Australia at the end of last week has seen heat records continue to tumble like Jenga blocks.

On Saturday February 11, as New South Wales suffered through the heatwave’s peak, temperatures soared to 47℃ in Richmond, 50km northwest of Sydney, while 87 fires raged across the state amid catastrophic fire conditions.

On that day, most of NSW experienced temperatures at least 12℃ above normal for this time of year. In White Cliffs, the overnight minimum was 34.2℃, breaking the station’s 102-year-old record.

On Friday, the average maximum temperature right across NSW hit 42.4℃, beating the previous record of 42℃. The new record stood for all of 24 hours before it was smashed again on Saturday, as the whole state averaged 44.02℃ at its peak. At this time, NSW was the hottest place on Earth.

A degree or two here or there might not sound like much, but to put it in cricketing parlance, those temperature records are the equivalent of a modern test batsman retiring with an average of over 100 – the feat of outdoing Don Bradman’s fabled 99.94 would undoubtedly be front-page news.

And still the records continue to fall. At the time of writing, the northern NSW town of Walgett remains on target to break the Australian record of 50 days in a row above 35℃, set just four years ago at Bourke Airport.

Meanwhile, two days after that sweltering Saturday we woke to find the fires ignited during the heatwave still cutting a swathe of destruction, with the small town of Uarbry, east of Dunedoo, all but burned to the ground.

This is all the more noteworthy when we consider that the El Niño of 2015-16 is long gone and the conditions that ordinarily influence our weather are firmly in neutral. This means we should expect average, not sweltering, temperatures.

Since Christmas, much of eastern Australia has experienced repeated bouts of extreme heat. This increased frequency of heatwaves is a strong trend in the observations, and it is set to continue as the human influence on the climate deepens.

It is all part of a rapid warming trend that over the past decade has seen new heat records in Australia outnumber new cold records by 12 to 1.

Let’s be clear, this is not natural. Climate scientists have long been saying that we would feel the impacts of human-caused climate change in heat records first, before noticing the upward swing in average temperatures (although that is happening too). This heatwave is simply the latest example.

What’s more, in just a few decades’ time, summer conditions like these will be felt across the whole country regularly.

Attributing the heat

The useful thing scientifically about heatwaves is that we can estimate the role that climate change plays in these individual events. This is a relatively new field known as “event attribution”, which has grown and improved significantly over the past decade.

Using the Weather@Home climate model, we looked at the role of human-induced climate change in this latest heatwave, as we have for other events before.

We compared the likelihood of such a heatwave in model simulations that factor in human greenhouse gas emissions, compared with simulations in which there is no such human influence. Since 2017 has only just begun, we used model runs representing 2014, which was similarly an El Niño-neutral year, while also experiencing similar levels of human influence on the climate.

Based on this analysis, we found that heatwaves at least as hot as this one are now twice as likely to occur. In the current climate, a heatwave of this severity and extent occurs, on average, once every 120 years, so is still quite rare. However, without human-induced climate change, this heatwave would only occur once every 240 years.

In other words, the waiting time for the recent east Australian heatwave has halved. As climate change worsens in the coming decades, the waiting time will reduce even further.
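The attribution arithmetic in the two paragraphs above can be sketched numerically. The return periods are the estimates quoted in the text; the code simply shows how the risk ratio and the waiting time relate:

```python
# Sketch of the event-attribution arithmetic described above.
# The return periods are the article's estimates, not new data.

def risk_ratio(p_with_warming, p_without_warming):
    """Probability ratio: how many times more likely the event has become."""
    return p_with_warming / p_without_warming

# Annual probabilities implied by the quoted return periods.
p_current = 1 / 120   # once every 120 years in today's climate
p_natural = 1 / 240   # once every 240 years without human influence

rr = risk_ratio(p_current, p_natural)
print(rr)             # 2.0 -> the heatwave is now twice as likely
print(1 / p_current)  # 120.0 -> the waiting time has halved from 240 years
```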

Our results show very clearly the influence of climate change on this heatwave event. They tell us that what we saw last weekend is a taste of what our future will bring, unless humans can rapidly and deeply cut our greenhouse emissions.

Our increasingly fragile electricity networks will struggle to cope, as the threat of rolling blackouts across NSW showed. It is worth noting that the large number of rooftop solar panels in NSW may have helped to avert such a crisis this time around.

Our hospital emergency departments also feel the added stress of heatwaves. When an estimated 374 people died from the heatwave that preceded the Black Saturday bushfires, the Victorian Institute of Forensic Medicine resorted to storing bodies in hospitals, universities and funeral parlours. The Victorian heatwave of January 2014 saw 167 more deaths than expected, along with significant increases in emergency department presentations and ambulance callouts.

Infrastructure breaks down during heatwaves, as we saw in 2009 when railway lines buckled under the extreme conditions, stranding thousands of commuters. It can also strain Australia’s beloved sporting events, as the 2014 Australian Open showed.

These impacts have led state governments and other bodies to investigate heatwave management strategies, while our colleagues at the Bureau of Meteorology have developed a heatwave forecast service for Australia.

These are likely to be just the beginning of strategies needed to combat heatwaves, with conditions currently regarded as extreme set to be the “new normal” by the 2030s. With the ramifications of extreme weather clear to everyone who experienced this heatwave, there is no better time to talk about how we can ready ourselves.

We urgently need to discuss the health and economic impacts of heatwaves, and how we are going to cope with more of them in the future.

We would like to acknowledge Robert Smalley, Andrew Watkins and Karl Braganza of the Australian Bureau of Meteorology for providing observations included in this article.


Sarah Perkins-Kirkpatrick receives funding from the Australian Research Council.

Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.

Matthew Hale receives funding from the Australian Research Council.


How the warming world could turn many plants and animals into climate refugees

Wed, 2017-02-15 14:43
The Flinders Ranges were once a refuge from a changing climate. Shutterstock

Finding the optimum environment and avoiding uninhabitable conditions has been a challenge faced by species throughout the history of life on Earth. But as the climate changes, many plants and animals are likely to find their favoured home much less hospitable.

In the short term, animals can react by seeking shelter, whereas plants can avoid drying out by closing the small pores on their leaves. Over longer periods, however, these behavioural responses are often not enough. Species may need to migrate to more suitable habitats to escape harsh environments.

During glacial times, for instance, large swathes of Earth’s surface became inhospitable to many plants and animals as ice sheets expanded. This resulted in populations migrating away from or dying off in parts of their ranges. To persist through these times of harsh climatic conditions and avoid extinction, many populations would migrate to areas where the local conditions remained more accommodating.

These areas have been termed “refugia” and their presence has been essential to the persistence of many species, and could be again. But the rapid rate of global temperature increases, combined with recent human activity, may make this much harder.

Finding the refugia

Evidence for historic climate refugia can often be found within a species’ genome. Populations expanding out of a refugium will generally be smaller than the parent population that remains within it, so the expanding populations tend to lose genetic diversity through processes such as genetic drift and inbreeding. By sequencing the genomes of multiple individuals from different populations of a species, we can identify where the hotbeds of genetic diversity lie, and thus pinpoint potential past refugia.

My colleagues and I recently investigated population genetic diversity in the narrow-leaf hopbush, a native Australian plant that got its common name from its use in beer-making by early European Australians. The hopbush has a range of habitats, from woodlands to rocky outcrops on mountain ranges, and has a wide distribution across southern and central Australia. It is a very hardy species with a strong tolerance for drought.

We found that populations in the Flinders Ranges have more genetic diversity than those to the east of the ranges, suggesting that these populations are the remnants of an historic refugium. Mountain ranges can provide ideal refuge, with species only needing to migrate short distances up or down the slope to remain within their optimal climatic conditions.

In Australia, the peak of the last ice age led to drier conditions, particularly in the centre. As a result, many plant and animal species gradually migrated across the landscape to southern refugial regions that remained more moist. Within the south-central region, an area known as the Adelaide Geosyncline has been recognised as an important historic refugium for several animal and plant species. This area encompasses two significant mountain ranges: the Mount Lofty and Flinders ranges.

Refugia of the future

In times of increased temperatures (in contrast to the lower temperatures experienced during the ice age) retreats to refugia at higher elevations or towards the poles can provide respite from unfavourably hot and dry conditions. We are already seeing these shifts in species distributions.

But migrating up a mountain can lead to a literal dead end, as species ultimately reach the top and have nowhere else to go. This is the case for the American Pika, a cold-adapted relative of rabbits that lives in mountainous regions in North America. It has disappeared from more than one-third of its previously known range as conditions have become too warm in many of the alpine regions it once inhabited.

Further, the almost unprecedented rate of global temperature increase means that species need to migrate at rapid rates. Couple this with the destructive effects of agriculture and urbanisation, leading to the fragmentation and disconnection of natural habitats, and migration to suitable refugia may no longer be possible for many species.

While evidence for the combined effects of habitat fragmentation and climate change is currently scarce, and the full effects are yet to be realised, the predictions are dire. For example, modelling the twin impact of climate change and habitat fragmentation on drought sensitive butterflies in Britain led to predictions of widespread population extinctions by 2050.

Within the Adelaide Geosyncline, the focal area of our study, the landscape has been left massively fragmented since European settlement, with estimates of only 10% of native woodlands remaining in some areas. The small pockets of remaining native vegetation are therefore left quite disconnected. Migration and gene flow between these pockets will be limited, reducing the survival chances of species like the hopbush.

So while refugia have saved species in the past, and poleward and up-slope shifts may provide temporary refuge for some, if global temperatures continue to rise, more and more species will be pushed beyond their limits.


Matt Christmas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


End of the road? Why it might be time to ditch your car

Wed, 2017-02-15 05:08

The average car is stationary 96% of the time. That’s a fairly consistent finding around the world, including in Australia. A car is typically parked at home 80% of the time, parked elsewhere 16% of the time, and on the move just 4% of the time. And that doesn’t include the increasing time we spend at a standstill in traffic.

Bill Ford, executive chair of the Ford Motor Company, says we’re heading for “global gridlock”. And he’s not alone in saying we cannot simply keep adding more cars to our roads.

The funny thing is that while we own more cars than ever, we’re actually using them less. You might think that’s a good thing; that we’re responding to worsening congestion and health, debt and environmental damage by opting to drive fewer kilometres.

But the problem is, we’re still choking our cities and harming our health, finances and environment by continuing to waste our resources on these increasingly dormant vehicles.

It’s not just the car itself that’s wasted. Consider the resources and infrastructure – both private and public – needed to design, mine, manufacture, ship, sell, fuel, move, store, secure, insure, regulate, police, maintain, clean, repair and dispose of all these cars.

David Owen, a staff writer with The New Yorker, has called cars “consumption amplifiers”. They are emblematic of a hyper-consumerist lifestyle that doesn’t really make us any happier.

Our declining car use gives us an opportunity. If we can adjust our car ownership patterns to match our actual needs, we can plan our lives and cities in ways that don’t revolve around a mode of transport that no longer serves us like it used to.

Fast cars?

By default, we still think of cars as fast and convenient. It might appear that way on the street, but the overall reality is quite different.

For a start, cars are a woefully inefficient way to transport a person from A to B. Typically, only around 20% of the energy from fuel combustion is converted into motion.

If we assume that the average car weighs roughly 20 times more than its driver, we can estimate that for a single-occupant car journey, with no significant other cargo, the effective fuel efficiency drops to just 1% (adding a passenger only raises this to 2%). And that’s before we take into account the broader resource and infrastructure requirements, as mentioned above, for that journey to take place.
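The estimate above is simple enough to reproduce. The figures below are the article's rough assumptions (a 20% engine efficiency and a car roughly 20 times the driver's mass), with illustrative masses filled in:

```python
# Back-of-envelope version of the effective-efficiency estimate above.
# Masses are illustrative assumptions chosen to match the article's
# "car weighs roughly 20 times more than its driver".

engine_efficiency = 0.20  # ~20% of fuel energy becomes motion
car_mass = 1500           # kg, an assumed typical car
person_mass = 75          # kg, so the car is 20x the driver's mass

def effective_efficiency(occupants):
    """Share of fuel energy spent moving the people rather than the car."""
    payload = occupants * person_mass
    return engine_efficiency * payload / car_mass

print(round(effective_efficiency(1), 3))  # 0.01 -> about 1% for a lone driver
print(round(effective_efficiency(2), 3))  # 0.02 -> about 2% with a passenger
```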

The urban car isn’t terribly fast either. Research shows that when we take into account not only the time in transit but also the time spent working to pay for the car and its operation, the car’s average “effective speed” in cities is generally well under 13 km per hour. This has been called the “urban speed paradox”. As cyclist and author Greg Foyster has pointed out, “your typical commuting cyclist can beat that without breaking a sweat”.
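The "effective speed" idea can be sketched with a quick calculation. Every input figure below is an assumption chosen for illustration, not data from the research itself:

```python
# Rough numerical sketch of the "effective speed" idea described above.
# All input figures are assumed illustrations, not the research's own data.

def effective_speed(annual_km, transit_hours, annual_car_cost, net_hourly_wage):
    """km travelled per hour, counting both driving time and the hours
    worked to pay for the car and its running costs."""
    earning_hours = annual_car_cost / net_hourly_wage
    return annual_km / (transit_hours + earning_hours)

# Assumed inputs: 12,000 km/year at 30 km/h in traffic (400 hours of driving),
# $11,000/year in total car costs, $20/hour take-home pay.
speed = effective_speed(12000, 400, 11000, 20)
print(round(speed, 1))  # 12.6 -> under the ~13 km/h figure quoted above
```

The striking part is that with these assumptions the owner spends more hours earning the car's keep (550) than actually driving it (400).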

These and other factors have resulted in what’s called “peak car”. The average distance travelled per person by car has been declining for more than a decade. Commuting distances and average urban driving speeds have also peaked and the rate of new licences is plummeting.

Ford Motor Company’s future trends manager, Sheryl Connelly, has suggested that cars no longer symbolise freedom to this generation in the way they did to baby boomers. The rise of car-sharing schemes has also caused renting to lose its stigma. Young people now prize access over ownership.

Yet, for too many of us, a privately owned car remains the default for almost every transport task. There are times when cars are useful, but for general urban commuting, based on what we’ve seen above, it is like using a chainsaw to carve butter.

Expanding the transport toolkit

Many urban areas around the world are seeing a rapid shift away from private cars as the dominant form of transport. Areas of some cities are even going car-free while reallocating old road space to public or active transport, or back to nature.

In Australia, the City of Port Phillip has devised a plan to halt the growth in car ownership, even as the city’s population doubles, by converting hundreds of parking spots into car-share bays. Each share-car is reported to take up to 14 cars off the road, while cutting the costs of personal mobility by up to 60%.

One local resident was reported as saying the recent addition of a car-share spot at the end of his family’s street had prompted them to sell their rarely used car. “Now that there is a really good number of cars close by, we can make that move to going completely car-free.”

Then there’s the rapid development of other shared transport such as bike-share programs. By 2014, the number of cities with bike-share programs had increased to 850, up from only 68 in 2007.

Alongside all this are new planning models for activity centres, integrated transport networks, and carless or near-carless residential developments.

All the while, speed limits are decreasing, free public transport (at point of access) is increasing, and automobile and business associations are advocating for heavy investment in active and public transport.

Transport in 2017 and beyond

None of this is meant to demonise cars or their drivers, or to suggest that no one should own a car. What I am saying is that the model of everyone owning their own car is best relegated to the 20th century. This leads to the question of what the optimal level of car ownership might be, where we achieve the transport benefits without the waste, damage and expense.

What if in 2017 we focused on developing our personal and collective toolkits beyond the chainsaw, to do a better job of moving ourselves around?

You might get to know your local matrix of transport options better, from walking, cycling and skating routes to public transport, shared transport (car-share, ride-share, bike-share, taxis) and rented transport (cars, trucks, motorbikes, bicycles). Over time, you could then home in on how they work best together.

More of us could consider placing our cars in peer-based car-share or ride-share programs (informal or formal). Or we could even choose to sell our cars, and opt into one of the above schemes as a user rather than provider.

Peak car is upon us, and with it comes the opportunity to choose new models of urban transport that better match our current needs for quality, sustainable living. It is vital work. And like any good tradie, we need to make sure we have the right tools for the job.

The Conversation

Anthony James does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Want electricity reform? Start by giving power back to the states

Tue, 2017-02-14 14:44

In 1999, Australians were paying some of the lowest electricity prices in the world. Now they are among the highest. What went wrong?

Back then, the electricity network in the southern and eastern states of Australia had just been reformed to create a regional wholesale market, called the National Electricity Market. Some states – Victoria and then South Australia – privatised their industry. All states then progressively deregulated their retail electricity markets, and transferred the regulation of their remaining network monopolies to two quasi-federal regulatory agencies, the Australian Energy Regulator and the Australian Energy Markets Commission.

These reforms replaced the state governments’ electricity commissions – derided by some as Soviet-style relics – with what was purported to be a dynamic new arrangement of competition and private risk-taking.

The reforms were bolstered by reports by the Industry Commission (now the Productivity Commission) predicting that even though electricity prices were already low, they would fall further as the pressure of competition drove the industry to become more efficient and customer-focused.

The exact opposite happened. The sector’s productivity has declined sharply after tens of billions of dollars were spent on network infrastructure – particularly substations – that are not used at anything like their full capacity, even at the peak of an Australian summer.

But the failures are not just in the regulation of networks. Our retail markets compare very unfavourably with those in other countries, and our wholesale electricity markets seem to be cornered regularly – most recently in South Australia on February 8, when a lack of available generation led regulators to cut the power to some 90,000 customers.

Besides not being cheaper, the system is also no greener or more reliable. The amount of greenhouse emissions per unit of electricity produced has shown little change, and as South Australia has shown, the system can’t always keep the lights on.

Australia is blessed with a surplus of every conceivable energy resource and no shortage of technical and managerial skill. How did it come to this?

Passing the buck

The common factor underlying these failures is accountability. Officials use the phrase “all care and no responsibility” to describe the situation in which politicians become as skilled in finger-pointing as they are in showing empathy for those suffering through power blackouts.

The latest manifestation of this is the mis-characterisation of Australia’s electricity problem as one of renewables versus fossil fuels. In this view, the solution is to turn back the clock to last century’s high-emission technologies (such as coal), despite the clear risk to the private sector of doing so.

What can sensibly be done to get us out of this mess? The real problem is not renewables – it’s poor governance.

Fixing governance problems is hard, but it’s clear which direction we should take. It needs to be made obvious who should be strung up when things go wrong, or covered in glory when they go right. This clarity will in turn deliver the accountability needed to anticipate and solve problems, rather than the buck-passing and blame-dodging we’re seeing now.

The state model

There are lessons to be learned from other comparable federal countries, including Germany, the United States and Canada. They too have regional power markets and retail competition, but they have avoided the bickering between federal and state governments seen in Australia.

Their electricity networks (except interconnectors) and their retail markets are overseen by the states and provinces – as used to be the case in Australia.

When accountability is clearly established, we will know where the buck stops when the lights go out or prices become unaffordable. But under Australia’s current quasi-federal system, there is an irresistible temptation to point fingers and obfuscate if things go wrong.

Politicians past and present created this problem, and they must now rise above it. The immediate task is not to tinker with existing institutions, but instead to make some fundamental changes.

The starting point should be to recognise that electricity supply is the province (under our Constitution) of the states and territories, not the Commonwealth. It would be better to get on with fixing our own back yards than idly waiting and wishing, often without good reason, for “national coordination”.

We should reassign oversight of networks and retail markets back to the states and territories, as used to be the case. Regional transmission interconnection and market operation should continue to be federally coordinated, but the primary responsibility for pricing and reliability must rest with the states. The states might choose to delegate the oversight of various issues to central entities, but these entities must be clearly answerable to those states under the terms of their delegation.

In some respects these will be major changes, and in others, mainly a change of mindset and orientation. But for too long now we have been pushing a model of governance that does not reflect our constitutional responsibilities, and is at odds with the approach adopted in other federal countries.

It has failed and it is time to change. Other nations’ experience can give us confidence that if we make changes we can look forward to vibrant electricity markets that actually work in customers’ best interests.

The Conversation

Bruce Mountain does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

We need a comprehensive housing approach to deal with heatwaves

Tue, 2017-02-14 05:08
We can learn a lot from Queenslanders. Shutterstock

Heatwaves across much of the country this summer have revealed a serious problem with our national housing stock.

Stressed electricity networks that can’t guarantee supply have led to politicians advising people not to go home, but to go to the movies instead. The risk is that houses aren’t built to mitigate the health risks of this kind of heat.

We are using air conditioning as a band-aid instead of identifying the cause and seriousness of the condition. Australia’s continued lack of planning to solve the problem is a risky strategy.

But imagine a future where we can reliably depend on our dwellings to help us “keep our cool”. A future where we don’t have to rely on free air conditioning at the local shopping centre, and where heatwaves don’t overstress our hospitals, electricity networks, or bank accounts.

A staged and comprehensive approach can create such a future – one that would improve our individual, family and national resilience.

Smarter design and construction

Rather than being seduced by the property market’s surface bling, we need to pay more attention to the quality of the building envelope – the roof, walls, windows and floor. We can manage unwanted heat inside our homes in two key ways.

The first is to stop the heat getting in. Many aspects of a home’s design (orientation, eaves, external shading and landscaping) and construction materials (roof colour and coating, insulation, glass and window type) can help control how hot it gets inside. Guides on these design features are available at the government’s Your Home website.

The second key is having strategies to manage unwanted heat. Again, this can be done through good design (with clerestory windows, solar chimneys, roof vents, and so on) and by using the right materials. Opening and closing your house in response to the outside temperature is also important.

For example, some houses combine aspects of traditional Queenslander architecture – deep eaves, shady verandas, casement windows and louvres – with modern materials like high-performance insulation and tinted low e-glass; dense internal materials such as rammed earth; and night-time ventilation. These homes rarely surpass 30℃, despite their southeast Queensland location.

Combining Queenslander design with new materials works magic! Wendy Miller

Sometimes mechanical assistance may be required, but rather than thinking that you need to air-condition the whole house, strategies such as “cooling the occupant” or creating a “safe retreat” – similar to that of a bushfire or cyclone shelter – are worth considering.

Better ratings

It is difficult to recognise design and construction that protect against extreme heat when you see them. The star rating of Australian homes is one attempt to communicate this. It is an indication of how a specific house design and its materials determine internal temperature.

While a good start, the rating system is based on past average weather patterns. What would be better is using current or even future weather data. And knowing the expected temperature of each room in the house would help to find cost-effective solutions for improving the performance of new and existing homes.

Perhaps there is even a need for a “stress test” – giving the house a “heat index” colour code similar to the weather bureau’s forecasts for heatwaves.
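That colour-code idea can be sketched in a few lines. To be clear, the temperature bands, labels and function name below are entirely invented for illustration; there is no official heat-index rating of this kind.

```python
# Hypothetical sketch of a "heat index" colour code for dwellings.
# The thresholds and labels are invented, not an official standard.

def heat_index_colour(peak_indoor_temp_c):
    """Map a dwelling's modelled peak indoor temperature (Celsius) to a band."""
    if peak_indoor_temp_c < 28:
        return "green"   # low heat risk
    if peak_indoor_temp_c < 32:
        return "yellow"  # moderate risk
    if peak_indoor_temp_c < 36:
        return "orange"  # high risk
    return "red"         # severe risk

print(heat_index_colour(27))  # e.g. a well-shaded Queenslander-style home
print(heat_index_colour(38))  # e.g. a poorly insulated home in a heatwave
```

In practice the input would come from the same thermal modelling that underpins the star rating, rather than being entered by hand.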

Do our homes need a heat risk rating? Wendy Miller

On top of this we need to know that the dwelling in question has actually been built to the standards indicated by the design. Transparent and consistent inspection practices need to be implemented, but are practically non-existent across Australia today.

Leadership from government and industry

Some of the blame for the situation can be put on ideological differences about the role of government. For instance, building regulation is seen as “red tape” rather than consumer protection. The division of powers between governments also complicates the situation.

Despite these challenges, a few barriers should be addressed as a matter of urgency.

The community needs to understand that the current building requirements, which vary by state and by dwelling type, are inadequate. They certainly do not represent a house with safe indoor temperatures throughout the year.

Greater transparency is needed. In particular, “concessions” that allow the minimum standard to be further reduced should be removed from the star rating because these have no impact on internal temperatures.

Information about the performance standard of each dwelling needs to be made available to everyone in every property transaction. We need to know more about the buildings we live in – preferably before we buy or rent.

The last step is to acknowledge that housing, health and energy issues are all strongly linked. In extreme weather these are also linked to disaster management and emergency services.

Can we fix it?

Governments have already embarked on several projects, including restructuring our health system, transitioning our electricity market, updating our National Construction Code, and refining our disaster management and emergency response strategy.

But the reforms must be holistic. Policies, regulation and infrastructure planning and expenditure in any one of these sectors can lead to unintended consequences in the others. A “one system” approach would create significant economic, social and environmental opportunities for everyone.

So, can we create a better future? If our politicians, and the associated industries, have the skills, foresight and courage to put your home – our homes – into these discussions, yes we can!

The Conversation

Wendy Miller has received funding from the Australian Research Council, the National Climate Change Adaptation Research Facility, the NSW Office of Environment and Heritage and the South Australia Department of State Development.

Categories: Around The Web

The anatomy of an energy crisis - a pictorial guide, Part 1

Mon, 2017-02-13 10:15
What energy crisis?

Who could forget the energy “crises” that affected electricity supply across south-eastern Australia last year?

First came the Tasmanian crisis, following the Basslink outage in December 2015. With hydro storage dams at record lows after a drought, on the back of aggressive storage withdrawals during the carbon tax years, Tasmania enforced drastic measures to ensure supply. Thankfully, flooding winter rains, together with the eventual restoration of Basslink in June, helped resuscitate the apple isle’s energy supply. Tasmania’s hydro storages now stand at around 40% of full capacity, more than double their level at the same time last year.

Tasmanian hydropower storage capacity shows a strong seasonal trend, filling with winter rains and drawing down during summer and early autumn. Exchanges with Victoria via Basslink help provide security of supply, which was compromised by the outage in December 2015, when storages were already dangerously low on the back of the 2015 drought and the aggressive draw-down of storages during the carbon tax years to capitalise on higher mainland spot prices.

July saw the first of a sequence of crises in South Australia that followed from, and were in many eyes attributable to, the closure of the state’s last coal-fired power plant at Port Augusta in May 2016.

With gas prices at record highs, and South Australia effectively isolated due to upgrades on the main interconnector into Victoria, spot prices skyrocketed, culminating on July 7, a cold, windless winter day. Energy consumers that had not contracted supply were at the whims of traders. Prices averaged over $1,400/MWhour for the day and around $520/MWhour for the week, almost 800% above the average for that time of year.

Graphical summary of electrical power generation, demand, spot prices in, and exchange between, each of the five regions comprising the National Electricity Market. The period shown is the week of July 3-9, 2016, during the first South Australian energy crisis. Over the week, interconnector flows from Victoria into South Australia were restricted to an average of 225 MW, or about 40% of full capacity, due to upgrade works. On July 7, at the height of the crisis, the flow was limited to 166 MW. VWP = volume-weighted price in $/MWhour. TOTAL.DEMAND = regional demand in MW. DISPATCH.GEN = regional generation in MW. NETINTERCHANGE = net exports (positive) or imports (negative) in MW.

All that was superseded by the events of September, when extreme winds played havoc with the South Australian transmission system, toppling transmission lines in the mid north. Poorly understood default control settings automatically disconnected wind farms, leading to the interconnector tripping and a state-wide blackout. Unanticipated problems in restarting the system exacerbated the pain.

Finally, failure of a transmission line in south-west Victoria on December 1 led to a power loss at the aluminium smelter in Portland. The damage to “frozen” pot lines has jeopardised the smelter’s ongoing viability. Because the smelter is the state’s largest energy consumer and one of the biggest regional employers, the political fallout is intense.

After the NEM’s “annus horribilis”

With 2016 very much the National Electricity Market’s (NEM) “annus horribilis”, pundits awaited the summer of 2017 with bated breath. The combination of high gas prices, frighteningly intense summer heat, a fragile and ageing energy supply system, and increasing concerns about market rules set the scene for “interesting times”. Whatever was to transpire, it was always going to be inflamed by political point-scoring - the one commodity that seems rarely in short supply.

And so it would prove to be, even in the northern states of Queensland and New South Wales, which had hitherto largely escaped the wrath of Electryone.

The summer of 2017 has seen extraordinary price rises beset the spot market across the country, particularly in New South Wales and Queensland. Further blackouts in South Australia, and market interventions to avert them in New South Wales, have done little to assuage concern that our electrical power system is no longer fit for purpose. Queensland prices in 2017 have been some 400% above the historical average for this time of year.

Graphical summary of NEM operations for the period 1st January - 11th February 2017.

With the summer far from finished, our politicians remain hard at it, pointing fingers and apportioning blame, doing almost anything and everything except that which is in shortest supply - namely, embracing bipartisanship. A glimmer of hope is to be found in comments from Chief Scientist Alan Finkel, who has been charged with leading a review of the security of our National Electricity Market.

What is our NEM?

To provide some guide to what is happening to the NEM, and why, I have compiled a few pictures that illustrate elements of its basic anatomy. This is designed as background. In the following posts in this series I will focus on the details of the recent events that have so heightened the political heat.

The NEM comprises five interconnected regional jurisdictions - one for each state along the eastern seaboard, plus South Australia. For each region, the market operator AEMO runs a 5-minute interval, energy-only, dispatch ‘pool’, or spot market. The objective is to balance supply with demand in a way that minimises cost, based on the bids submitted by generators. It is a complicated process. Settlement prices are aggregated at half-hourly intervals, determined as the average of the six 5-minute dispatch prices, each set by the price of the last offer needed to meet demand in its dispatch interval.
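As a toy illustration of that settlement step, with invented prices rather than real market data, the half-hourly settlement price is simply the average of the six 5-minute dispatch prices:

```python
# Toy illustration of NEM settlement: six 5-minute dispatch prices
# (each set by the last offer needed to meet demand in its interval)
# are averaged into one half-hourly settlement price.
# Prices are invented, in $/MWhour.

dispatch_prices = [38.0, 41.5, 40.0, 90.0, 42.0, 39.5]

settlement_price = sum(dispatch_prices) / len(dispatch_prices)

print(f"Settlement price: ${settlement_price:.2f}/MWhour")  # → $48.50/MWhour
```

Note how a single price spike in one 5-minute interval (the $90 here) lifts the settlement price paid across the whole half hour.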

Pictorial of the generation structure on the NEM, as of early 2017. The top half shows the five regions comprising the NEM; the bottom half shows the power as generated and dispatched by fuel type, progressing from fossil fuels on the left through to renewables on the right. For the period shown (1/1/2017-11/2/2017), black coal contributed 55.6% of supply (at a capacity factor of 68%), brown coal 22.7% (cf = 79%), natural gas 11% (cf = 24%), hydro 5.4% (cf = 14%) and wind 4.6% (cf = 29%). Units are in MW. Note that gas is the only fuel source common to all regions, but its contribution varies significantly, from over 50% in South Australia to just a few per cent in Victoria.

With the focus of the dispatch ‘pool’ being least cost electricity supply, AEMO also operates several ancillary markets to ensure the requirements for safe grid operation are met. This includes the provision of reserve supply and frequency control normally sourced from synchronous generators such as large coal plants.

AEMO also has regulatory powers to intervene in the market by demanding generation be made available in cases when the total bid capacity is insufficient. When demand exceeds total capacity, or if the available capacity cannot be made available in a timely fashion, AEMO can authorise load-shedding, effecting a re-balancing of demand to meet the available generation capacity.

Normally, large electricity consumers will contract their power supply via the contract market, rather than buying directly on the spot market. This insures consumers against the extreme price volatility allowed on the spot market, where prices can range between -$1,000 and $14,000/MWhour. For comparison, the standard domestic retail tariff is about $250/MWhour, or $0.25/kWhour.
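The unit arithmetic behind that comparison is worth spelling out; the figures are the ones quoted above:

```python
# Converting between the wholesale unit ($/MWhour) and the retail unit
# ($/kWhour): 1 MWh = 1,000 kWh, so divide by 1,000.

def per_kwh(price_per_mwh):
    return price_per_mwh / 1000

print(per_kwh(250))     # standard retail tariff: 0.25 $/kWhour
print(per_kwh(14000))   # spot price ceiling: 14.0 $/kWhour
print(per_kwh(-1000))   # spot price floor: -1.0 $/kWhour
```

In other words, an uncontracted consumer exposed to the spot cap would be paying over 50 times the standard retail rate.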

The bid strategies of power plants reflect differences in their cost structures and performance characteristics. For example, fuel costs for brown coal generators are very low, but these plants are best operated at constant load. In contrast, gas plants can ramp their output much more readily, but at much higher cost. As a consequence, in Victoria gas is used almost exclusively to meet peaks in demand, as illustrated in the three graphics below.

Dispatch in Victoria for the period 8/2/2017-10/2/2017, coloured by fuel source. Also shown is the Victorian demand (brown line), available generation bid into the market (top black line), and net exports as negative (bottom black line) Brown coal power generation in Victoria for the period 8/2/2017-10/2/2017, coloured by power station. Natural gas generation in Victoria for the period 8/2/2017-10/2/2017, coloured by power station.

Typically a large base-load generator, such as a brown coal plant, will bid much of its capacity into the spot market at its short-run cost, to ensure a slice of the action. In contrast, peaking power plants will bid at prices well above marginal cost, anticipating that they will be required only very occasionally. Forward contracts of various kinds help insure revenue streams: they protect base-load generators against spot prices below their long-term cost of production, and pay peaking plants for being available when needed.
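A minimal sketch of this merit-order logic, with invented plant names, capacities and bids (not actual NEM offers), shows how the last offer needed sets the price, and why demand peaks are so much more expensive:

```python
# Sketch of merit-order dispatch with hypothetical bids: cheapest offers
# are dispatched first, and the marginal (last dispatched) offer sets
# the clearing price. All names and numbers are invented for illustration.

def clear_market(bids, demand_mw):
    """Dispatch cheapest offers first until demand is met.
    Returns (dispatch dict in MW, clearing price in $/MWhour)."""
    dispatch, price = {}, 0.0
    remaining = demand_mw
    for name, capacity_mw, bid_price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(capacity_mw, remaining)
        dispatch[name] = take
        price = bid_price          # marginal offer sets the price
        remaining -= take
    return dispatch, price

bids = [
    ("brown_coal", 1500, 5.0),    # base load: bids near short-run cost
    ("black_coal", 1000, 30.0),
    ("gas_peaker", 400, 300.0),   # peaker: bids well above marginal cost
]

# Off-peak: coal alone covers demand and the price stays modest.
print(clear_market(bids, 2000))
# Peak: the peaker is needed and its offer sets a much higher price.
print(clear_market(bids, 2700))
```

At 2,000 MW the peaker is never dispatched and the marginal black coal offer clears the market; at 2,700 MW the peaker's $300/MWhour offer becomes the price paid for every dispatched megawatt.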

Renewables such as wind dispatch at the whims of the weather and, because of negligible short-run marginal costs, bid their output at very low prices. As price takers, wind generators tend to drive spot prices lower, impacting the viability of other generators. As shown below, and discussed in more detail in following posts, the recent events in South Australian dispatch highlight the challenges the market faces when wind power output correlates poorly with demand.

Dispatch in South Australia for the period 8/2/2017 through 10/2/2017, coloured by fuel source. Also shown are the South Australian demand (brown line), available generation bid into the market (top black line), and net imports (bottom white line). Blackouts on February 8 occurred when the local dispatch curve hit the available generation. At that time there was no more capacity ready to be dispatched, so AEMO instigated load-shedding. (Note that not all capacity in South Australia was bid into the market at this time.)

Finally, rooftop PV is not dispatched onto the grid, but rather is “revealed” to the market as a reduction in demand.

Why are spot prices rising?

In theory, the spot market is designed to encourage competition that ensures prices provide generators with a revenue stream linked to their long-run marginal cost of production. If prices depart from this, competitive market principles should ensure the system re-balances, either through investment in new generation or the withdrawal of old. Of course, competition needs to be underpinned by adequate diversity in ownership.

And so shifts in the spot prices, signalled via the contract markets, are designed to reflect the balance of demand and supply. The years 2009-2014 were characterised by persistent reductions in demand across the NEM, in part due to the growing penetration of solar PV. At the same time, the addition of new wind farms to meet the Renewable Energy Target contributed to a growing oversupply in the market, reflected in very subdued spot prices. For example, from 2010-2014 Victorian spot prices averaged about $35/MWhour, after factoring out the carbon tax. While that price is above the cost of production for existing Victorian brown coal generators, it would be well-nigh impossible to obtain financing for any new large-scale generation at prices less than about two to three times that.

Since 2014, demand has risen in Queensland due in part to the commissioning of new LNG gas processing facilities at Curtis Island. Reductions in generation capacity in Victoria and South Australia due to closure and/or mothballing of several fossil plants (Anglesea in Victoria and Northern and Pelican Point in South Australia), has significantly tightened the supply-demand balance. Consequently, spot prices are on the rise across the NEM.

Why do spot prices vary between regions?

Spot prices averaged about $60/MWhour across last year, but vary somewhat by region and by season.

As shown in the diagrams above, the make-up of generation in each of the five regions varies considerably, leading to different cost structures. Similarly, differences in demand profiles lead naturally to differences in the generation fleet. Finally, there are differences in market competition.

Limited interconnection capacity, along with differences in regional demand and generation portfolios, can occasionally lead to large separations in spot market prices. In times of very high demand during summer heatwaves and winter cold snaps, or when supply is constrained by infrastructure outages (power plant or transmission) or fuel supply and cost issues, spot prices can be extremely volatile.

Annual variations in spot prices for the period January 1 through February 11, for each of the four mainland regions. Red numbers show the average for the years prior to 2017.

Historically, South Australia has had the highest prices and Victoria the lowest. This reflects South Australia’s much higher proportion of gas in the generation mix, its larger daily and seasonal swing between minimum and maximum demand and, arguably, competition issues. As illustrated below, peak demand in South Australia can exceed 2.5 times the median, compared with around 1.5 times in Queensland. A greater relative proportion of peaking generation capacity means higher average spot prices. Competition has been a particular issue in South Australia since the closure of the Northern Power Station, as it is in Queensland.
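The two measures used in this comparison (peak demand as a multiple of the median, and the 1.5 × IQR outlier rule familiar from boxplots) are straightforward to compute. The demand series below is invented to mimic a "peaky" SA-style profile, not real AEMO data:

```python
# Demand-variability measures on an invented half-hourly demand series (MW).
import statistics

def peak_to_median(demand_mw):
    """Peak demand expressed as a multiple of the median."""
    return max(demand_mw) / statistics.median(demand_mw)

def iqr_outliers(demand_mw):
    """Flag points more than 1.5 * IQR beyond the quartiles,
    matching the usual boxplot convention."""
    q = statistics.quantiles(demand_mw, n=4)
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [d for d in demand_mw if d < lo or d > hi]

# A peaky profile: demand sits near 1200 MW with rare extreme spikes.
sa_like = [1200] * 46 + [1500, 1800, 2600, 3100]

print(round(peak_to_median(sa_like), 2))  # → 2.58, peak well above 2x median
print(iqr_outliers(sa_like))              # the spikes are flagged as outliers
```

On a flatter, Queensland-style profile, the same functions would report a peak-to-median ratio near 1.5 and few, if any, outliers.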

Annual demand in South Australia and Queensland, in MW (top panel) and as a percentage of median demand (bottom panel). Note the recent rise in demand in QLD, due in large part to the recent commissioning of LNG plants. The bottom panel highlights the much greater daily and seasonal variability in demand in SA, where maximum demand occasionally exceeds 2.5 times the median; in comparison, QLD peaks are only 1.5 times the median. The boxes show the 25-75 per cent quartile range, with a notch at the median. Outliers more than 1.5 times the IQR beyond the quartiles are shown by dots.

How well suited is our market?

It is important to realise that while the physical characteristics of any power system are governed by the laws of physics, the market itself is a construct - just one of many possible ways of matching supply and demand. As an energy-only ‘pool’, there are questions about how well our NEM is suited to providing a cost-effective, secure and environmentally acceptable energy supply. In particular, there is very little incentive for demand-side management. Moreover, the power system does not operate in isolation, and needs to be considered alongside policy settings in the gas and water markets, as well as climate policy. In the following posts in this series I intend to address some of these issues, with examples drawn from our recent experience on the NEM.

The Conversation Disclosure

Mike Sandiford receives funding from ARC and ANLEC.

Categories: Around The Web

How drones can help fight the war on shark attacks

Mon, 2017-02-13 05:09

Following an unprecedented series of shark attacks off Australian beaches, the need to find practical solutions is intensifying.

Aerial drones could be an important tool for reducing the risk of shark attacks on our beaches within the coming years. Here’s how it would work: drones would fly autonomously over beaches, continuously scanning for sharks with image-recognition software.

If a shark is detected, real-time video will be instantly sent to beach authorities, such as lifeguards. If it is a dangerous shark, appropriate action can be taken to ensure public safety, such as sounding alarms and clearing people from the water.

Like other shark bite mitigation measures, this cannot completely eliminate the possibility of a shark attack. However, it could help to reduce the risk to an acceptable level for the majority of beach users.

Importantly, the drone-based approach to shark bite mitigation does not harm sharks or other marine wildlife, such as whales, dolphins, rays and sea turtles, unlike more controversial shark control measures such as mesh nets or baited drum lines.

Surfer has a close encounter with a great white shark, as seen by a drone.

Testing drones

As part of the NSW government’s A$16 million Shark Management Strategy, researchers from the NSW Department of Primary Industries (NSW DPI) and Southern Cross University (SCU) have demonstrated that drones can reliably detect sharks off Australian beaches.

NSW DPI researchers have also compared the costs and benefits of marine wildlife sightings between drones and helicopters, as well as established environmental conditions suitable for drones to provide effective shark detection capabilities.

This summer, a team of SCU and DPI researchers completed an intensive drone trial on five important beaches in NSW to verify that drones will work in the long term. As part of the trial, drones performed six 20-minute patrols each morning on each beach for every day of the school holidays.

Researchers monitoring drone footage spotted great white, bull, whaler, mako and hammerhead sharks off NSW beaches. They also saw many dolphins, sea turtles and less dangerous shark species, such as shovel-nosed sharks.

These trials included experiments comparing “people versus machines” by evaluating the utility of automated flight paths and shark recognition software.

Drone captures a great white shark cruising the shallows of Northern NSW.

Automating the drone-based approach

The overall objective of this research is to develop a fully automated drone-based shark surveillance system in the near future.

We envisage that a team of aerial drones could run continuous shark detection missions during the hours when most people are on our beaches.

When required, each drone will automatically take off, patrol for sharks, land itself and charge up again, ready for the next mission. If a drone detects a shark, it can alert beach authorities.

Their response will vary depending on the species of shark detected and its location. This will be immediately apparent from the live video feed and location data they receive. As well as tracking sharks, the drones will also be fitted with sirens and lights to contribute to any emergency actions.
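As a purely hypothetical sketch of that alerting step — the species list, confidence threshold, response names and function below are all invented, not part of any real system — the decision logic might look like:

```python
# Hypothetical sketch of drone shark-alert logic. Every name, species
# list and threshold here is invented for illustration only.

DANGEROUS = {"great white", "bull", "tiger"}

def handle_detection(species, confidence, location):
    """Decide what beach authorities should do for one drone detection."""
    if confidence < 0.8:
        # Low-confidence sightings get human review of the live video feed.
        return ("review_footage", location)
    if species in DANGEROUS:
        # Dangerous species: sound sirens and clear the water.
        return ("sound_alarm_and_clear_water", location)
    # Harmless species (e.g. shovel-nosed sharks) are only logged.
    return ("log_sighting", location)

print(handle_detection("great white", 0.95, "Ballina"))
print(handle_detection("shovel-nosed", 0.90, "Byron Bay"))
```

A real system would of course feed this from the image-recognition pipeline and route the result to lifeguards' devices, sirens and lights.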

Great white shark off a beach in Northern NSW.

Problems to solve

There are still at least five major challenges to overcome before establishing a fully functional automated drone-based shark surveillance system. But these could be gradually overcome within the next few years.

Civil aviation regulations

Aviation regulations restrict the use of fully automated drones in most airspace. We could overcome this problem by modifying the law or establishing restricted zones over beaches where drones can fly.

Public safety concerns

We need to minimise the risk of injury as a result of drone failure, by making sure their flight components are failsafe and having flight paths clear of beachgoers. We also need airspace safety systems to ensure that drones are grounded when emergency and other aircraft are in the vicinity.

Public privacy concerns

A drone-based shark surveillance system would require public acceptance. For this, beachgoers need to be aware of the sorts of data being collected by the drones, and to rest assured that this does not breach privacy legislation.

Reliable hardware

Although aerial drones can already automatically take off, fly routes, land and charge themselves, it is not clear how reliably this technology will stand up to the Australian beach environment. To be effective, we will need drones that can reliably function under heavy workloads in coastal conditions. Similarly, data transfer platforms also need to be fast and reliable.

Purpose-designed software

Image analysis software needs to be further developed to automatically detect sharks with a high level of accuracy. Customised software will also need to be developed to coordinate the missions of a team of drones and to ensure seamless video streaming to the portable wireless devices of beach authorities and users.

In terms of the hardware and software challenges, there are a number of research groups racing towards solutions with the goal of commercialising their products. Once an automated drone-based technology for shark bite mitigation is in place, it should be possible to solve issues regarding legislation, safety and privacy.

Given the current rate of technological development and the falling costs of commercially available drones, fully automated drones could be reducing the risk of shark attacks on Australian beaches within five years. However, for many nervous beachgoers, this may not be soon enough.

The Conversation

Brendan Kelaher receives funding from the NSW Department of Primary Industries for two PhD students working on shark projects.

Andrew Colefax receives project funding for his PhD from the NSW Department of Primary Industries (NSW DPI). He also receives additional work from the NSW DPI.

Paul Butcher works for NSW Department of Primary Industries. He receives funding from the NSW and Commonwealth Governments. He is an Adjunct Associate Professor at Southern Cross University.

Vic Peddemors receives funding from the NSW Government, the Australian Research Council and the Fisheries Research and Development Corporation (FRDC) on behalf of the Australian Government.

Bob Creese does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Why did energy regulators in South Australia deliberately turn out the lights?

Fri, 2017-02-10 13:05
High gas prices have left Adelaide's Pelican Point power station running at less than half its capacity. Peripitus/Wikimedia Commons, CC BY-SA

Last Wednesday evening, shortly after 6pm local time, around 90,000 homes and businesses in South Australia were deliberately disconnected from the electricity grid for up to an hour. In what is becoming a familiar pattern, this event provoked politicians and political actors to release a stream of claims and counter-claims about what happened and what should be done about it.

So why did it actually happen? At the start of the day, electricity was being supplied by a combination of wind power, the two interconnectors from Victoria, and a modest amount of local gas generation. As the day heated up (the temperature in Adelaide hit a maximum of 42℃), demand grew, wind generation fell away, and the volume of electricity supplied by gas generators increased rapidly.

Half-hourly total state electricity consumption reached its maximum for the day between 5.00pm and 5.30pm, by which time rooftop solar was supplying about 9% of the total. This is a very common pattern on hot days in the state.

As the sun went down, total consumption went down but solar generation went down faster. This is also very common and in theory there is more than enough capacity to meet this level of demand from gas-fired generators plus the interconnectors.

In practice, however, not all of South Australia's gas generation was available on the day, meaning that supply was not sufficient to meet demand. The shortfall hit shortly after 6.30pm local time, not helped by the fact that the maximum temperature arrived very late in the day, boosting demand for after-work air conditioning.

Switched off

To prevent potentially widespread damage to the entire system, which might have triggered even more widespread blackouts, the Australian Energy Market Operator exercised its authority to instruct SA Power Networks (the local “poles-and-wires” distributor) to start a series of rolling disconnections of blocks of consumers – a tactic known as “load-shedding”.

Unfortunately, although load-shedding lowered demand by only 3%, it affected a large number of consumers. It was about 40 minutes before the underlying demand fell to the point where the available generation could supply all the electricity required, at which time all customers were reconnected.

There are two reasons why this was deemed necessary. First, the peak demand for grid electricity was the highest for three years. Second, the amount of gas generation available on Wednesday was about 20% less than the nominally available capacity. Had the full capacity been available, the blackouts would have easily been avoided. It is this fact that has particularly angered the South Australian government, which is once again facing political derision for failing to keep the lights on.

The largest single part of the unavailable capacity is 240 megawatts – roughly 8% of the state’s total gas generation – at Pelican Point power station. Pelican Point is the highest-efficiency, lowest-emission thermal power station in South Australia. But nearly two years ago its owner, the French multinational Engie (which also owns the Hazelwood coal station in Victoria), announced that the rising cost of gas had made it too expensive to run at full capacity. Since then Pelican Point has operated only intermittently, and never at more than half of its nameplate capacity.

What a gas

High gas prices are the direct result of the huge demand for gas by the three export LNG plants at Gladstone, in Queensland. Gas that might notionally have been used to supply electricity for South Australians is instead being shipped to customers in Asia.

Meanwhile, smaller amounts of nominally available gas-fired electricity were also offline in South Australia on Wednesday. We are unlikely to know why until the official reports on the incident are published.

More importantly, however, making more gas generation capacity available is only a short-term fix and does not seriously address the changes needed to maintain, in the words of the National Electricity Objective, a secure, reliable and affordable supply of electricity.

What kinds of changes will be required? A good starting point would be to acknowledge the role that rooftop solar is already playing in reducing peak demand for electricity from the grid. On Wednesday, the peak demand for grid-supplied electricity was about two hours later and 4% lower than it would have been if no one had solar panels.

The need for load-shedding could have been completely avoided with the help of technologies that are already available for power consumers to reduce their own demand. For more than a decade, demand-side participation (which gives consumers more influence over the timing and quantity of their own electricity use) and direct load control (which involves reducing specific customers’ demand at certain times) have both been talked about, reported on, trialled, and instituted in only a desultory way. They have never been taken seriously by either industry participants or their regulators.

Large-scale electricity storage, which has become viable only recently thanks to significant cost reductions, is another option. These are just some of the likely components of a low-emission, 21st-century electricity supply system.

Almost the only positive action which governments have taken on these matters in recent times has been to establish the review by Chief Scientist Alan Finkel. The real test for the politicians will be whether they understand and act decisively on what Finkel and his colleagues have to say.

The Conversation

Hugh Saddler is a member of the Board of the Climate Institute

Categories: Around The Web

Delving through settlers' diaries can reveal Australia's colonial-era climate

Fri, 2017-02-10 05:13

To really understand climate change, we need to look at the way the climate behaves over a long time. We need many years of weather information. But the Bureau of Meteorology’s high-quality instrumental climate record only dates back to the start of the 20th century.

This relatively short period makes it hard to identify what is natural climate change and what is human-induced, particularly when it comes to things like rainfall. We really need data that go further back in time.

Natural records of climate such as tree rings and ice cores can tell us a lot about pre-industrial climate. But they too need to be verified in some way, matched against some other form of data.

So, we went hunting for some. Over two years, we looked through newspapers, manuscripts, government documents and early settlers’ diaries from Sydney, Melbourne, Adelaide and Tasmania. We took thousands of photos of letters, journals, tables and graphs. We rediscovered handwritten observations from farmers, convicts, sailors and reverends across southeastern Australia, stretching all the way back to European settlement in 1788.

Rummaging around in libraries might not seem like the best way to understand what’s been happening with our climate. But weather diaries kept by dedicated observers in the 1800s are proving important for climate research.

While there are still many observations to be rescued, the records we’ve found so far have already called into question the stability of the relationship between El Niño, La Niña and rainfall in southeastern Australia.

The records

We collected 39 different sources of weather data covering 1788–1860, with continuous observations from the mid-1830s. The numbers we’ve found so far paint a dramatic picture of the weather and climate experienced by Australia’s colonial settlers.

For example, Thomas Lempriere, who ran the Port Arthur penal settlement, recorded the harsh Tasmanian winters he suffered in the 1830s. Surgeon William Wyatt in Adelaide noted heatwaves and snowfall during the 1840s. And William Dawes, Australia’s first meteorologist, diligently observed the first drought encountered by Australia’s English settlers in 1790 and 1791.

Weather diaries kept by Reverend William Clarke in Sydney in the 1840s, now at the State Library of New South Wales. Author supplied.

Connecting past and present

While the observations taken by these “weather people” are valuable insights into the climate of the past, observations made more than 150 years ago are not quite the same as those taken today. Many of the instruments were not kept in the best locations. John Pascoe Fawkner, one of Melbourne’s early settlers, even stored his thermometer in a cellar!

Differences in exposure, observation techniques and instruments also mean that it’s difficult to use these observations to quantify the exact size of the temperature change since the First Fleet arrived.

However, old weather records can still tell us a lot about year-to-year climate variations. Historical rainfall observations, for example, are less prone to large biases, because rain gauges are less complex than, say, a thermometer or barometer. By using a combination of instrumental and documentary information, we can tell the story of our climate over a much longer time scale than ever before.

Flagstaff Hill in Melbourne 1858, by George Rowe. On the right you can see the weather observer taking his daily observations on the white platform, with a rain gauge behind him. State Library of Victoria

Australia’s climate is almost manic in its ability to swing between droughts and floods. Combining our rescued weather observations with modern data from similar locations means we can see this in southeastern Australia’s rainfall over the past 170 years.

Periods of low rainfall stand out, such as the mid-1840s, the Federation Drought at the turn of the 20th century, the World War II Drought in the early 1940s, and the Millennium Drought from 1997 to 2009. There are also clear times of high rainfall, including the 1870s, 1890s and 1970s.

Rainfall, and prolonged wet and dry periods, in two regions of southeastern Australia from 1840 to 2010. Adapted from Ashcroft et al. 2016.

Most of these periods are associated with El Niño and La Niña events: dry conditions in southeastern Australia are generally linked to El Niño, while wet years often coincide with La Niña. However, this is not always the case. Previous studies have found a breakdown in the relationship in the mid-20th century, and natural palaeoclimate records suggest a similar breakdown in the early 1800s.

Understanding these periods might help us better understand how El Niño and La Niña events might change in the future. But what do the observations from the weather people say?

We compared our historical rainfall data to previous El Niño/La Niña events and found a weakening in the relationship during 1920–1940 and 1835–1850. The breakdown was especially clear in data from the southern part of our study region. This is the first time the 19th-century breakdown has been seen in Australia using instrumental data.
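The kind of analysis involved can be illustrated with a rolling (sliding-window) correlation between a rainfall series and an El Niño index. This is only a minimal sketch with synthetic data standing in for the rescued observations; the real study's data and methods are its own:

```python
import numpy as np

def rolling_correlation(x, y, window):
    """Pearson correlation of two series within a sliding window."""
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(len(x) - window + 1)])

rng = np.random.default_rng(0)
enso = rng.standard_normal(170)                # stand-in El Niño index, one value per year
rain = -0.7 * enso + rng.standard_normal(170)  # rainfall anti-correlated with El Niño, plus noise
corr = rolling_correlation(enso, rain, window=21)
print(corr.mean())  # strongly negative overall: El Niño years tend to be dry
```

In an analysis like this, windows where the correlation drifts towards zero mark periods when the El Niño-rainfall relationship weakens, such as the 1835-1850 and 1920-1940 breakdowns described above.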

The hunt continues

Of course, the next question is why? Why does the impact of El Niño and La Niña on Australian rainfall change over time? What happened in the mid-1800s? It might be El Niño’s cranky uncle, the Interdecadal Pacific Oscillation, or perhaps strange behaviour in the atmosphere around Antarctica.

We’re still not sure. But the weather observations taken by dedicated settlers more than 150 years ago are helping us answer these questions. Until then, the hunt continues.

The Conversation

Linden Ashcroft has received funding from the Australian Research Council.

David Karoly receives funding from the Australian Research Council Centre of Excellence for Climate System Science and an ARC Linkage grant. He is a member of the Climate Change Authority and the Wentworth Group of Concerned Scientists.

Joelle Gergis receives funding from the Australian Research Council.

Categories: Around The Web

Droughts and flooding rains already more likely as climate change plays havoc with Pacific weather

Thu, 2017-02-09 04:57

Global warming has already increased the risk of major disruptions to Pacific rainfall, according to our research published today in Nature Communications. The risk will continue to rise over coming decades, even if global warming during the 21st century is restricted to 2℃ as agreed by the international community under the Paris Agreement.

In recent times, major disruptions have occurred in 1997-98, when severe drought struck Papua New Guinea, Samoa and the Solomon Islands, and in 2010-11, when rainfall caused widespread flooding in eastern Australia and severe flooding in Samoa, and drought triggered a national emergency in Tuvalu.

These rainfall disruptions are primarily driven by the El Niño/La Niña cycle, a naturally occurring phenomenon centred on the tropical Pacific. This climate variability can profoundly change rainfall patterns and intensity over the Pacific Ocean from year to year.

Rainfall belts can move hundreds and sometimes thousands of kilometres from their normal positions. This has major impacts on safety, health, livelihoods and ecosystems as a result of severe weather, drought and floods.

Recent research concluded that unabated growth in greenhouse gas emissions over the 21st century will increase the frequency of such disruptions to Pacific rainfall.

But our new research shows even the greenhouse cuts we have agreed to may not be enough to stop the risk of rainfall disruption from growing as the century unfolds.

Changing climate

In our study we used a large number of climate models from around the world to compare Pacific rainfall disruptions before the Industrial Revolution, during recent history, and in the future to 2100. We considered different scenarios for the 21st century.

One scenario is based on stringent mitigation in which strong and sustained cuts are made to global greenhouse gas emissions. This includes in some cases the extraction of carbon dioxide from the atmosphere.

In another scenario emissions continue to grow, and remain very high throughout the 21st century. This high-emissions scenario results in global warming of 3.2-5.4℃ by the end of the century (compared with the latter half of the 19th century).

The low-emissions scenario - despite the cuts in emissions - nevertheless results in 0.9-2.3℃ of warming by the end of the century.

Increasing risk

Under the high-emissions scenario, the models project a 90% increase in the number of major Pacific rainfall disruptions by the early 21st century, and a 130% increase during the late 21st century, both relative to pre-industrial times. The latter means that major disruptions will tend to occur every four years on average, instead of every nine.
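The recurrence figures follow from simple arithmetic: a percentage increase in event frequency shortens the mean interval between events by the same factor. A quick sketch using the numbers quoted above:

```python
def new_interval(baseline_years, pct_increase):
    """Mean recurrence interval after a percentage increase in event frequency."""
    factor = 1 + pct_increase / 100  # frequency multiplier
    return baseline_years / factor

# Pre-industrial baseline: one major disruption roughly every 9 years.
print(round(new_interval(9, 130), 1))  # late century, high emissions: ~3.9 years
print(round(new_interval(9, 90), 1))   # early century, high emissions: ~4.7 years
print(round(new_interval(9, 56), 1))   # low-emissions scenario: ~5.8 years
```

So even the low-emissions scenario's 56% increase takes the average gap between major disruptions from about nine years down to under six.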

The increase in the frequency of rainfall disruption in the models arises from an increase in the frequency of El Niño and La Niña events in some models, and an increase in rainfall variability during these events as a result of global warming. This boost occurs even if the character of the sea-surface temperature variability arising from El Niño and La Niña events is unchanged from pre-industrial times.

Although heavy emissions cuts lead to a smaller increase in rainfall disruption, unfortunately even this scenario does not prevent some increase. Under this scenario, the risk of rainfall disruption is projected to be 56% higher during the next three decades, and to remain at least that high for the rest of the 21st century.

The risk has already increased

While the frequency of major Pacific rainfall disruptions looks set to increase in the future, is it possible that humans have already increased the risk of major disruption?

It seems that we have: the frequency of major rainfall disruptions in the climate models had already increased by around 30% relative to pre-industrial times prior to the year 2000.

As the risk of major disruption to Pacific rainfall had already increased by the end of the 20th century, some of the disruption actually witnessed in the real world may have been partially due to the human release of greenhouse gases. The 1982-83 super El Niño event, for example, might have been less severe if global greenhouse emissions had not risen since the Industrial Revolution.

Most small developing island states in the Pacific have a limited capacity to cope with major floods and droughts. Unfortunately, these vulnerable nations could be exposed more often to these events in future, even if global warming is restricted to 2℃.

These impacts will add to the other impacts of climate change, such as rising sea levels, ocean acidification and increasing temperature extremes.

The Conversation

This research was supported by the National Environmental Science Programme and the Australian Climate Change Science Programme.

Brad Murphy, Christine Chung, François Delage, and Hua Ye do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.

Categories: Around The Web

A wolf in dogs' clothing? Why dingoes may not be Australian wildlife's saviours

Thu, 2017-02-09 04:57
Dingoes are often promoted as a solution to Australia's species conservation problems. Dingo image from www.shutterstock.com

Dingoes have often been hailed as a solution to Australia’s threatened species crisis, particularly the extreme extinction rate of the country’s small mammals.

But are dingoes really the heroes-in-waiting of Australian conservation? The truth is that no one knows, although our recent research casts a shadow over some foundations of this idea.

The notion of dingoes as protectors of Australian ecosystems was inspired largely by the apparently successful reintroduction of wolves into Yellowstone National Park in the United States. But Australia’s environments are very different.

Cascading species

To understand the recent excitement about wolves, we need to consider an ecological phenomenon known as “trophic cascades”. The term “trophic” essentially refers to food, and thus trophic interactions involve the transfer of energy between organisms when one eats another.

Within ecosystems, there are different trophic levels. Plants are typically near the base; herbivores (animals that eat plants) are nearer the middle; and predators (animals that eat other animals) are at the top.

The theory of trophic cascades describes what happens when something disrupts populations of top-order predators, such as lions in Africa, tigers in Asia, or Yellowstone’s wolves.

The wolves’ decline allowed herbivores, such as elk, to increase. In turn, the growing elk population ate too much of the shrubby vegetation alongside rivers, which, over time, changed from being mostly willow thickets to grassland. Then another herbivore – beavers – that relies on willows went locally extinct. This in turn affected the ecology of the local streams.

Wolves play a key role in Yellowstone’s ecosystems. Wolf image from www.shutterstock.com

Without beavers to engineer dams, local waterways changed from a series of connected pools to eroded gutters, with huge flow-on effects for smaller aquatic animals and plants.

Now, the reintroduction of wolves appears to have reduced the impact of elk on vegetation, some riparian areas have regenerated, some birds have returned and there are signs of beavers coming back. That said, wolf reintroduction has not yet fully reversed the trophic cascade.

Comparing apples with quandongs

Sturt National Park, in the New South Wales outback, has been nominated as an experimental site for reintroducing dingoes. Recently, we compared the environment of Sturt with Yellowstone to consider how such a reintroduction might play out.

These regions are clearly very different. Both are arid, but that is where the similarity ends. Yellowstone has a stable climate and nutrient-rich soils, sits at high altitude and features diverse landscapes. Precipitation in Yellowstone hasn’t dropped below 200mm per year in more than a century.

Herds of bison in Yellowstone National Park. Helen Morgan

Yellowstone’s precipitation falls largely as heavy winter snow. Each spring the snowmelt flows in huge volumes into rivers, streams and wetlands across the landscape. This underpins a predictable supply of resources which, in turn, triggers herbivores to migrate and reproduce every year.

These predictable conditions support a wide range of carnivores and herbivores, including some of North America’s last-remaining “megafauna”, such as bison, which can tip the scales at over a tonne. Yellowstone also has many large predators – wolves, grizzly bears, black bears, mountain lion, lynx and coyotes all coexist there – along with a range of smaller predators too.

Predators in Yellowstone can be sure that prey will be available at particular times. The environment promotes stable, strong trophic links, allowing individual animals to reach large sizes. This strong relationship between trophic levels means that when the system is perturbed – for instance, when wolves are removed – trophic cascades can occur.

Unlike Yellowstone, arid Australia is dry, flat, nutrient-poor and characterised by one of the most extreme and unpredictable climates on Earth. The yearly rainfall at Sturt reaches 200mm just 50% of the time.

Australia’s Sturt Desert has a highly unpredictable climate. Helen Morgan

Australia’s arid ecosystems have evolved largely in isolation for 45 million years. In response to drought, fire and poor soils, arid Australia has evolved highly specialised ecosystems, made up of species that can survive well-documented “boom and bust” cycles.

Unlike the regular rhythm of Yellowstone life, sporadic pulses of water and fire affect and override the trophic interactions of species, between plants and herbivores, and predators and their prey. Our native herbivores travel in response to patchy and unpredictable food sources in boom times. But however good the boom, the bust is certain to follow.

Unpredictable but inevitable drought weakens trophic links between predators, herbivores and plants. Individuals die due to lack of water, populations are reduced and can only recover when rain comes again.

Our arid wildlife is very different from Yellowstone’s too. Our megafauna are long gone. So too are our medium-sized predators, such as thylacines.

Today, arid Australia’s remaining native wildlife is characterised by birds, reptiles and small mammals, along with macropods that are generally much smaller than the herbivores in Yellowstone.

Our predators are small and mostly introduced species, including dingoes, foxes and cats. None is equivalent to wolves, mountain lions or bears, which can reach more than three times the weight of the largest dingo. Wolves are wolves, and dingoes are dogs.

Wolves in dingo clothes?

What does all this mean for Australia? Yellowstone’s stable climate means that there are strong and reliable links between predators, prey and plants. By comparison, arid Australia’s climate is dramatically unstable.

This raises the question of whether we can reasonably expect to see the same sorts of relationships between species, and whether dingoes are likely to help restore Australia’s ecosystems.

We should conduct experiments to understand the roles of dingoes and the impacts of managing them. How we manage predators, including dingoes, should be informed by robust knowledge of local ecosystems, including predators’ roles within them.

What we shouldn’t do is expect that dingoes will necessarily help Australia’s wildlife, based on what wolves have done in snowy America. The underlying ecosystems are very different.

Many people are inspired by the apparently successful example of wolves returning to Yellowstone, but in Australia we should tread carefully.

Rather than trying to prove that dingoes in Australia are just as beneficial as wolves in Yellowstone, we should seek to understand the roles that dingoes really play here, and work from there.

The Conversation

Helen Morgan receives funding from the Keith and Dorothy Mackay Travelling Scholarship, University of New England, the Holsworth Wildlife Endowment Trust and Invasive Animals CRC

Guy Ballard receives funding from the Invasive Animals Cooperative Research Centre, NSW Local Land Services and the NSW National Parks & Wildlife Service.

John Thomas Hunter does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Crisis, what crisis? How smart solar can protect our vulnerable power grids

Wed, 2017-02-08 05:11

Some commentators seem to be worried that our electricity networks are facing an impending voltage crisis, citing fears that renewables (rooftop solar panels in particular) will threaten the quality of our power supply.

These concerns hinge on the fact that solar panels and other domestic generators can push up voltages, potentially making it harder for network companies to maintain stability across the grid. But what is less well understood (and far less reported) is the massive potential for local generation to actually improve the quality of our power, rather than hinder it.

A new report from our Networks Renewed project aims to show how technologies such as “smart inverters” can help to manage voltage at the household scale, rather than at substations. This would improve the quality of our power and flip the potential problem of household renewables into a solution.

Why all the fuss about voltage?

Electricity from our power points should be at roughly 230 volts, without deviating too far above or below. It fluctuates throughout the day, depending on how much power is being used.

Here’s an analogy: think of water flowing through pipes. The power lines are the pipes themselves, and the voltage is like the water pressure in the pipes – that is, the amount of force pushing the water (or electricity) along. Using large amounts of power causes the voltage to drop, rather like when the washing machine comes on while you’re having a shower; all of a sudden the pressure drops because other appliances are using the water too.

Pressure is also affected by how close the appliance is to the source. For instance, if your washing machine and shower were connected right at the foot of the dam, instead of at the end of several miles of pipes, you could have them both switched on and not notice a drop in pressure.

For an electrical distribution system, this means that the houses farthest away from the substation are the most susceptible to sagging (lower) voltage when large amounts of power are being used.

Voltage management has always been an issue for grid operators, particularly in rural locations where the power lines are longer. Low voltage on long power lines often means dim and flickering lights for residents at the end of the line.

On the flip side, overvoltages can damage sensitive electronic equipment – a bit like when the water pressure pops your garden hose off the tap.

These fluctuations can become a problem for power companies when the voltage goes outside the allowable range.

How does solar power affect voltage?

Our electricity networks were not originally built for lots of local generation sources like rooftop solar panels or small wind turbines. Until recently, power has generally flowed only in one direction, from a large (usually coal-fired) power station to consumers.

The growing number of household solar panels on the network has changed this landscape, and now power flows both ways. Solar panels can make managing the grid more complex, because the voltage rises where they are generating power.

A small voltage increase is not a problem when there is enough demand for electricity. But when nobody is home in the neighbourhood, the solar power might lift the voltage beyond the upper limit.

In this case, the circuit protectors in the generator will probably trip and the solar panels will be cut off, to protect the network. This also means that the household won’t have access to (or get paid for!) the solar power it is generating.
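The water-pressure analogy can be put in rough numbers. Below is a toy single-feeder sketch of why voltage sags under load and rises under solar export; all values are illustrative, not from any real network:

```python
# Toy model of one radial feeder: the voltage at the far end of the line
# depends on the net power flowing through the line's resistance.
V_SOURCE = 230.0   # volts at the substation end
R_LINE = 0.5       # ohms, total line resistance to the last house

def voltage_at_end(net_load_watts):
    """Approximate end-of-line voltage for a given net household load.

    Positive net load draws current down the line and sags the voltage;
    negative net load (solar export) pushes current back up the line
    and lifts the voltage above the source level.
    """
    current = net_load_watts / V_SOURCE  # crude constant-voltage estimate of line current
    return V_SOURCE - current * R_LINE

print(round(voltage_at_end(4000), 1))   # hot afternoon, air conditioning on: sags below 230 V
print(round(voltage_at_end(-3000), 1))  # nobody home, panels exporting: rises above 230 V
```

The longer the line (the larger R_LINE), the bigger both effects become, which is why rural customers at the end of long feeders see the worst of it.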

Any customer-owned generator can affect the voltage – including solar, batteries, or diesel generators. But we tend to hear about solar because it is by far the most popular means of local generation; Australia now has more than 1.5 million homes with rooftop solar, and that figure is rising rapidly.

While some people might see this as an issue, sometimes the solution lies in the problem itself. In this case, new solar systems can offer a much more sophisticated way to manage grid voltage.

The innovation: smart inverters can control solar and batteries to help stabilise voltage on the grid.

How can solar become the solution?

Traditionally, voltage management solutions are fairly blunt, affecting tens or even hundreds of properties at a time, despite the fact that conditions might be quite different at each property. The equipment used – replete with technical-sounding names such as “on-load tap changers” and “line-drop compensators” – is expensive and is often located within transformers at substations. All of this electrical engineering kit adds to the cost of energy for customers.

However, new solar and battery systems now have the intelligence to manage voltage in a cheaper and more targeted way, through their “smart” inverters. These new technologies may provide the missing link to new renewable and reliable energy sources.

This is how it works: residential solar, batteries and other generators are connected to the grid through inverters that now have embedded IoT (internet of things) communications technology. These smart inverters allow the network to "talk" to the local generator and request support services, such as the provision of what's called reactive power (see graphic below).

Reactive power can help to raise and lower the voltage on the network, improving the quality of our power, including voltage stability. For more technical detail, see our newly released report on the potential for smart inverters to help manage the grid.

Smart inverters can export or absorb both real and reactive power.

All this is only possible if network businesses are open to new, proactive ways of operating, as demonstrated by our Networks Renewed project partners United Energy in Victoria and Essential Energy in New South Wales.

This means a shift in thinking from the traditional passive customer model – we deliver energy to you! – to a more dynamic and collaborative one in which customers can actually help to manage the grid as well as using and generating power.

Sure, transitioning an entire energy system is no mean feat, but it offers an opportunity to build a better, more resilient electricity system that includes more renewable energy.

If we are smart, we will not need to trade off our climate impact with the dependability of our electricity system. We just need to be open to the new ways of solving old problems.

The Conversation

The Institute for Sustainable Futures (ISF) at the University of Technology Sydney undertakes paid sustainability research for a wide range of government, NGO and corporate clients, including energy businesses. The Networks Renewed project is funded by the Australian Renewable Energy Agency (ARENA) and the NSW and Victorian state governments, in partnership with Essential Energy, United Energy, Reposit Power, SMA Australia, and the Australian PV Institute. Lawrence McIntosh is also a partner at PV Lab Australia, a solar panel quality assurance business, and serves as the part time Principal Executive Officer of SolarShare, a community owned solar project in Canberra, ACT.

Dani Alexander is a member of the Institute for Sustainable Futures (ISF), which undertakes paid sustainability research for a wide range of government, NGO and corporate clients, including energy businesses.

Categories: Around The Web

Australia's universities are not walking the talk on going low-carbon

Wed, 2017-02-08 05:11
Australia's universities are great at green innovation, but not so good at going low-carbon themselves. PrinceArutha/Wikimedia Commons, CC BY-SA

Australian universities have a proud tradition in researching, teaching and advocating the science of climate change. The famous statistic that 97% of climate scientists agree that humans are altering the climate is courtesy of researchers at the University of Queensland. Nine of the nation’s 43 universities have been ranked “well above world standard” in environmental science, and many of the leading public voices on climate policy – such as Ross Garnaut, Will Steffen and Tim Flannery – are university professors.

The science these universities (and many others around the world) have produced is very clear. Keeping average global temperatures within 2℃ of pre-industrial levels, as per the Paris climate agreement, will require a reduction in carbon (and other long-lived greenhouse gases) of 40-70% from 2010 levels by 2050, and near-zero emissions by 2100 (see section 3.4 here).

What’s less clear is what Australian universities are actually doing about it in practical terms. Universities exist to do three things: teach, research and engage. Climate change permeates all three endeavours, and these days many academics have lost any previous reticence about expressing forthright views on political questions such as the government’s emissions targets or renewable energy policies.

Anyone who followed Australian politics during Tony Abbott’s years as opposition leader and then prime minister will recall the fierce debates over the carbon tax, direct action, and the axing of the Climate Commission. Those with good memories will remember the furious argument that erupted around the Australian National University’s decision to divest from seven resources companies.

Universities clearly know what the science says and what society needs to do about it. But it is evidently easier to say what needs to be done than to do something about it. This contrast between words and actions is shown clearly by Australian universities’ collective response to climate change.

Promises, promises

Of the 43 Australian universities, three (RMIT, UTS and CSU, the last of which remains Australia’s only carbon-neutral university) have committed to absolute reductions in carbon emissions. A further 12 have pledged to reduce carbon emissions but have sprinkled their commitments with riders, such as reducing emissions per “gross floor area”, which would allow emissions to grow as the university expands and is inconsistent with the need to cut carbon in absolute terms.

To compile these data, I looked at all Australian universities’ 2015 annual reports, forward-looking corporate strategies, and historic mission-based compacts (performance agreements with the Commonwealth). Clearly, it is possible for universities to have a carbon target that is not mentioned in these reports, but my logic is that these documents give a clear picture of the organisation’s priorities and spending.

Worryingly, 11 universities make no mention at all of carbon-reduction policies anywhere in these documents.

The picture is no rosier for those nine universities (ANU, Griffith, JCU, Macquarie, Canberra, Melbourne, Queensland, UTS and UWA) whose environmental science has received the highest rating. Only Melbourne and Queensland mention carbon in their corporate strategies; the other seven are silent.

The same is true for 10 of the 12 universities whose researchers were involved in compiling the Intergovernmental Panel on Climate Change’s landmark Fifth Assessment Report. And if it’s not in the strategy it seems unlikely to be a priority for the university.

There are ten Australian universities that consume enough energy to be required to publish their emissions data, under the National Greenhouse and Energy Reporting Act (2007). Data from the Clean Energy Regulator show that their emissions increased by 4.6% between 2010-11 and 2014-15.

Lead by example

This poses two tricky questions for universities. First, why don’t universities act more decisively on the implications of their own climate research, while they are urging society to do so? Second, in a networked economy where knowledge is king, how will universities manage to partner with businesses to drive down greenhouse emissions, if they can’t even successfully do it themselves?

Universities are not short of funds to demonstrate how to build a low-carbon future, but they are short of partners. Currently Australian universities are at the bottom of the OECD’s rankings for fostering business partnerships and innovation. Yet the opportunities are there.

My analysis of universities’ 2015 reports shows that universities have committed to spending more than A$1.5 billion on property, plant and equipment capital works during 2016 alone (2016 annual reports have not yet been released). For comparison, the Australian Research Council awarded less than A$100 million between 2011 and 2013 for universities to research the built environment and design, meaning that it would take the ARC 50 years to match what universities spent on their own property in 2016.
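The 50-year comparison follows from simple arithmetic on the figures quoted above; the sketch below reproduces it (the article says "less than A$100 million" over three years, so the true figure is at least this).

```python
capital_spend_2016 = 1.5e9   # A$: universities' committed 2016 capital works
arc_grants_2011_13 = 100e6   # A$: upper bound on ARC built-environment funding over 3 years
arc_per_year = arc_grants_2011_13 / 3

years_to_match = capital_spend_2016 / arc_per_year
print(round(years_to_match))  # → 45, i.e. roughly 50 years at this funding rate
```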

Yet in spite of this huge outlay, only eight universities have committed to using their campuses as “living labs” to apply their research or to help deliver teaching and research in this field.

All universities talk of the need to forge external partnerships with government, communities and business. Yet looking at the detail, there are just 17 universities – fewer than half – that have committed themselves to trying to work across the university internally. It should be no surprise that universities are so poor at partnering with external organisations if they can’t manage it within their own organisations.

Evidence-based spending?

All of this suggests that most Australian universities are failing to take proper account of their own climate science in choosing how to run themselves. Remarkably, 25% of universities do not mention greenhouse emissions anywhere in their public reports, corporate strategies or mission-based compacts.

Less than 20% of Australian universities are using their campus development to deliver teaching and research outcomes or as a living lab to innovate. Only one university is committed to doing this in the future.

Yet meanwhile, universities have committed more than A$1.5 billion during 2016 (according to their 2015 annual reports) to their built environments. If this infrastructure spend is not also used to drive teaching and research outcomes, or to showcase how to adopt research, then it is being spent inefficiently.

If this money is being spent in a way that doesn’t help Australia hit its climate targets, and the world to live up to the Paris Agreement, then this spending is not evidence-based. And if spending and research are not evidence-based, we really do need to worry about what tomorrow brings.

This article is based on a presentation given at the World Renewable Energy Congress in Perth on February 6.

Universities Australia deputy chief executive Catriona Jackson responds:

Australia’s universities have a wide range of energy savings and lower-carbon initiatives.

Actually there are a significant number of projects and programs in place across the Australian university sector towards greater sustainability. Many of those initiatives have also been recognised through programs such as the Green Gowns awards.

But one of the challenges for universities in modernising facilities to meet higher environmental standards is having an ongoing source of infrastructure funding.

That’s yet another reason why we’re strongly against the closure of the $3.7 billion Education Investment Fund, which has funded major building works on Australia’s university campuses.

If we want smarter buildings and cleaner technology – let alone cutting-edge research and teaching facilities – an infrastructure fund is vital.

The Conversation

Mike Burbridge receives a PhD scholarship from the Co-operative Research Centre for Low Carbon Living and is currently a PhD student at Curtin University.

Categories: Around The Web

The environment needs billions of dollars more: here's how to raise the money

Tue, 2017-02-07 05:15
Australia: there's a lot of it to look after. Thomas Schoch/Wikimedia Commons, CC BY-SA

Extinction threatens iconic Australian birds and animals. The regent honeyeater, the orange-bellied parrot, and Leadbeater’s possum have all entered the list of critically endangered species.

It is too late for the more than 50 species that are already extinct, including bettongs, various wallabies, and many others. Despite international commitments, policies and projects, Australia’s biodiversity outcomes remain unsatisfactory.

A 2015 review of Australia’s 2010-2050 Biodiversity Conservation Strategy found that it has failed to “effectively guide the efforts of governments, other organisations or individuals”.

Insufficient resourcing is one cause of biodiversity loss. The scale of the challenge is formidable. Australia must tackle degradation and fragmentation of habitat, invasive species, unsustainable use of resources, the deterioration of the aquatic environment and water flows, increased fire events, and climate change.

This all requires money to support private landholders conducting conservation activities, to fund research, to manage public lands, and to support other conservation activities conducted by governments, industry, and individuals.

So where can we find the funds?

How much money is needed?

We have estimated that Australia’s biodiversity protection requires an equivalent investment to defence spending – roughly 2% of gross domestic product.

Of course, such estimates are up for debate, given that how much money is required depends on what we want the environment to look like, which methods we use, and how well they work. Other studies (see also here and here) point to a similar conclusion: far more money is needed to achieve significantly better outcomes.

Apart from government funding, private landholders, businesses, communities, Indigenous Australians, and non-government organisations contribute significantly to natural resource management. We were unable to quantify their collective cash and in-kind contributions, as the information is not available. But we do know that farmers spend around A$3 billion each year on natural resource management.

Nonetheless, the erosion of environmental values indicates that the level of spending required to sufficiently meet conservation targets far exceeds the amount currently being spent. The investment required is similar to the value of agriculture in Australia.

Conservation doesn’t come cheap. JJ Harrison/Wikimedia Commons, CC BY-SA

Unfortunately, the concentration of wealth and labour sets a limit to what any given community can pay.

Despite a high GDP per person and very wealthy cities, Australia has fewer than 0.1 people per hectare and a wealth intensity (GDP per hectare) of less than US$2,000 due to the sparse population and income of rural Australia.

Australia’s rural population has declined sharply, from over 18% in 1960 to around 10% today. Other countries (for example in Europe) are not limited to the same degree. Even China has a greater rural resource intensity than Australia.

Rural incomes are often volatile, but environmental investments need to be sustained. The history of Landcare highlights that private landholders have struggled to secure a reliable investment basis for sustainably managing the environment.

Can government pay what is required?

If Australia is serious about the environment, we need to know who will pay for biodiversity protection (a public good). This is especially true given that it is not feasible for rural (particularly Indigenous) landholders and communities to invest the required amount.

Will government be the underpinning investor? The federal government’s current spending program on natural resource management was initiated in 2014 with an allocation of A$2 billion over four years.

This was split between the second National Landcare Program, the (now-defunded) Green Army, the Working on Country program, the Land Sector Package, the Reef 2050 plan, the Great Barrier Reef Foundation, and the Whale and Dolphin Protection Plan.

As well as federal funding, the state, territory, and local governments invest in public lands, bushfire mitigation, waste management, water management, environmental research and development, biodiversity programs, and environmental policies. Local and state government departments together spend around A$4.9 billion each year on natural resource management.

The problem is that government spending on natural resource management cannot be significantly increased in the near future, due to fiscal pressures and the focus on reducing budget deficits.

Show us the money

At a time when Australia is reconsidering many aspects of its environmental policies, we should address the strategy for funding natural resource management.

It should be possible to leverage more private spending on the environment, preferably as part of a coordinated strategy. Diverse, market-based approaches are being used around the world.

For example, we could use market instruments such as biodiversity banking to support landholders in protecting biodiversity.

Taxation incentives, such as a generous tax offset for landholders who spend money on improving the environment, can be a very powerful catalyst and could be crucial for meeting environmental investment needs.

Evidence suggests that integrating a variety of mechanisms into a coordinated business model for the environment is likely to be the most efficient and effective approach. But this will not happen unless Australia faces the fiscal challenge of sustainability head-on.

Australia needs an innovative investment plan for the environment. By combining known funding methods and investment innovation, Australia can reduce the gap between what we currently spend and what the environment needs.

Without a more sophisticated investment strategy, it is likely that Australia will continue on the trajectory of decline.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Meet El Niño’s cranky uncle that could send global warming into hyperdrive

Mon, 2017-02-06 04:57

You’ve probably heard about El Niño, the climate system that brings dry and often hotter weather to Australia over summer.

You might also know that climate change is likely to intensify drought conditions, which is one of the reasons climate scientists keep talking about the desperate need to reduce greenhouse gas emissions, and the damaging consequences if we don’t.

El Niño is driven by changes in the Pacific Ocean, and shifts around with its opposite, La Niña, every 2-7 years, in a cycle known as the El Niño Southern Oscillation or ENSO.

But that’s only part of the story. There’s another important piece of nature’s puzzle in the Pacific Ocean that isn’t often discussed.

It’s called the Interdecadal Pacific Oscillation, or IPO, a name coined by a study which examined how Australia’s rainfall, temperature, river flow and crop yields changed over decades.

Since El Niño means “the boy” in Spanish, and La Niña “the girl”, we could call the warm phase of the IPO “El Tío” (the uncle) and the negative phase “La Tía” (the auntie).

These erratic relatives are hard to predict. El Tío and La Tía phases have been compared to a stumbling drunk. And honestly, can anyone predict what a drunk uncle will say at a family gathering?

What is El Tío?

Like ENSO, the IPO is related to the movement of warm water around the Pacific Ocean. Begrudgingly, it shifts its enormous backside around the great Pacific bathtub every 10-30 years, much longer than the 2-7 years of ENSO.

The IPO’s pattern is similar to ENSO, which has led climate scientists to think that the two are strongly linked. But the IPO operates on much longer timescales.

We don’t yet have conclusive knowledge of whether the IPO is a specific climate mechanism, and there is a strong school of thought which proposes that it is a combination of several different mechanisms in the ocean and the atmosphere.

Despite these mysteries, we know that the IPO had an influence on the global warming “hiatus” - the apparent slowdown in global temperature increases over the early 2000s.

Global temperatures are on the up, but the IPO affects the rate of warming. Author provided, data from NOAA, adapted from England et al. (2014) Nat. Clim. Change

Temperamental relatives

When it comes to global temperatures we know that our greenhouse gas emissions since the industrial revolution are the primary driver of the strong warming of the planet. But how do El Tío and La Tía affect our weather and climate from year to year and decade to decade?

Superimposed on top of the familiar long-term rise in global temperatures are some natural bumps in the road. When you’re hiking up a massive mountain, there are a few dips and hills along the way.

Several recent studies have shown that the IPO phases, El Tío and La Tía, have a temporary warming and cooling influence on the planet.

Rainfall around the world is also affected by El Tío and La Tía, including impacts such as floods and drought in the United States, China, Australia and New Zealand.

In the negative phase of the IPO (La Tía) the surface temperatures of the Pacific Ocean are cooler than usual near the equator and warmer than usual away from the equator.

Since about the year 2000, some of the excess heat trapped by greenhouse gases has been getting buried in the deep Pacific Ocean, leading to a slowdown in global warming over about the last 15 years. It appears as though we have a kind auntie, La Tía perhaps, who has been cushioning the blow of global warming. For the time being, anyway.

The flip side of our kind auntie is our bad-tempered uncle, El Tío. He is partly responsible for periods of accelerated warming, like the period from the late 1970s to the late 1990s.

The IPO has been in its “kind auntie” phase for well over a decade now. But the IPO could be about to flip over to El Tío. If that happens, it is not good news for global temperatures – they will accelerate upwards.

Models getting better

One of the challenges for climate science is to understand how the next decade, and the next couple of decades, will unfold. The people who look after our water and our environment want to know things like how fast our planet will warm in the next 10 years, and whether we will have major droughts and floods.

To do this we can use computer models of Earth’s climate. In our recently published paper in Environmental Research Letters, we evaluated how well a large number of models from around the world simulate the IPO. We found that the models do surprisingly well on some points, but don’t quite simulate the same degree of slow movement (the stubborn behaviour) of El Tío and La Tía that we observe in the real world.

But some climate models are better at simulating El Tío and La Tía. This is useful because it points the way to better models that could be used to understand the next few decades of El Tío, La Tía and climate change.

However, more work needs to be done to predict the next shift in the IPO and climate change. This is the topic of a new set of experiments that will be part of the next round of climate model comparisons.

With further model development and new observations of the deep ocean available since 2005, scientists will be able to more easily answer some of these important questions.

Whatever the case, cranky old El Tío is waiting just around the corner. His big stick is poised, ready to give us a massive hiding: a swift rise in global temperatures over the coming decades.

And like a big smack, that would be no laughing matter.

The Conversation

Ben Henley receives funding from an ARC Linkage Project and is an associate investigator with the ARC Centre of Excellence for Climate System Science.

Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.

Chris Folland receives funding from the UK Met Office via the joint BEIS/Defra contract GA1101.

David Karoly receives funding from the Australian Research Council Centre of Excellence for Climate System Science and an ARC Linkage grant. He is a member of the Climate Change Authority and the Wentworth Group of Concerned Scientists.

Mandy Freund receives funding from the ARC Centre of Excellence for Climate System Science.

Jaci Brown does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Three ingredients for running a successful environmental campaign

Fri, 2017-02-03 16:45

Here in Perth, a battle is raging over a 5km stretch of road known as Roe 8. Work on the project, part of the proposed Perth Freight Link, began late in 2016, and as legal avenues to halt construction were exhausted, opponents resorted to non-violent direct action. Some protest “mass actions” have attracted more than 1,000 people from all walks of life, and by the end of January, as bulldozers tore through the Coolbellup bushland under costly police protection, well over 100 people had been arrested.

Clearing machinery arrives on site under heavy police protection, January 2017. Gnangarra

Proponents say the road is necessary to improve the safety and efficiency of freight traffic to and from the Port of Fremantle. Opponents point to freight alternatives that will avoid Roe 8’s destruction of Aboriginal heritage, endangered banksia woodland, and important wetlands. Critics have also decried the government’s lack of transparency and prudence in decision-making, and highlighted serious shortcomings in environmental policies and laws.

The state’s Labor opposition has promised to scrap the project if it wins government at the state election on March 11, yet to the shock and dismay of many, bulldozing continues.

How will the conflict end? While history provides no sure guide to the future, it does reveal that successful environmental campaigns have tended to share several key features that unsuccessful campaigns have lacked. What are they?

1. Elections

Some of the biggest environmentalist victories have been won at the ballot box. This was the case for the proposed Franklin River dam, which became a federal election issue and helped to bring Bob Hawke’s Labor government to power.

By-elections have also decided the fate of environmentally contentious developments. Wayne Goss’s proposed “Koala tollway” between Brisbane and the Gold Coast cost Labor nine seats in the 1995 state election; a by-election in February 1996 saw the end of both Goss’s majority and the toll road.

Similarly, the campaign against a proposal for agricultural development in Victoria’s Little Desert delivered a shock metropolitan by-election result that, along with sustained public pressure, quashed the proposal.

More recently, the East-West Link toll road in Melbourne was, like Roe 8, hurried into the construction phase before an election with no full business case available for public scrutiny. The campaign against the Link, which united public transport advocates and local councils, ran for more than a year and incurred A$1.6 million in policing costs. Labor promised to halt construction and, following his electoral success in November 2014, the incoming premier Daniel Andrews tore up the contracts, setting what might turn out to be a crucial precedent for WA Labor’s Mark McGowan.

Even electoral failures can help environmental causes in the long run. Advocates for Lake Pedder in Tasmania didn’t attract political support for their cause from either major party, so they formed their own: the United Tasmania Group. It narrowly failed to win a seat at the 1972 state election, and Lake Pedder was lost.

But those who were galvanised by this failure were instrumental in the victory 10 years later over the Franklin dam, which transformed federal-state relations and launched the Australian Greens as a political force.

2. Unions

Many past environmental campaigns have succeeded only through union involvement. In the 1970s and ‘80s, almost 50% of the Australian workforce was unionised, giving the unions significant power to shut down contentious projects.

The 1970 campaign against oil drilling on the Great Barrier Reef claimed success when the Transport Workers Union and affiliates placed a black ban on drilling vessels in the region. The 1970s “Green bans”, led by Jack Mundey and the NSW Builders’ Labourers Federation, blocked a range of threats to heritage sites and bushland, including urban bushland at Kelly’s Bush on Sydney’s lower North Shore.

With union membership today at only around 15%, and the environment a low priority for some key unions, this opportunity for intervention has all but vanished.

3. Alternatives

Campaigns are more likely to be successful where environmentalists can point to viable alternatives for the projects they oppose. For example, opponents of woodchipping in East Gippsland in the 1980s produced a report showing how developing agriculture and tourism in parallel with a restructured and modernised timber industry would produce 450 extra jobs in the region.

This material was then used in political lobbying, as well as campaigning in marginal seats, leading to the declaration of the Errinundra Plateau and Rodger River National Parks in 1987. Logging continues, however, in adjacent areas.

Similarly, Citizens Against Route Twenty achieved success in 1990 with an intense media campaign that included an alternative vision for Brisbane’s urban transport.

Back to Roe 8

In sprawling suburban Perth, the track record of opposition to new roads does not inspire much hope for those campaigning against Roe 8. Previous protests against the Kwinana Freeway, the Graham Farmer Freeway and the Farrington Road extension were all more or less futile.

In each case the opponents were deemed to be “anti-progress”, with progress implicitly represented by the construction of new road infrastructure. Similar language pervades the current rhetoric around Roe 8, which is portrayed by supporters as a solution to all the traffic problems of Perth’s southern suburbs.

Sustainable transport advocates take a longer view; for instance, in the alternative plan laid out by Curtin University’s Peter Newman and Cole Hendrigan. This, however, has been rejected by the Barnett government in favour of the Roe Highway extension, which was originally planned for different purposes in the 1950s.

The protest against Roe 8 has two of the three key historical ingredients for success (an election, and a clearly outlined alternative plan). It has also harnessed the new power of social media and drone footage.

Opponents of Roe 8 at the end of an hour-long silent protest in Forrest Place, central Perth, January 2017.

Rarely has direct action clinched an environmental campaign, although there are precedents: protesters’ destruction of felled timber at Terania Creek in 1979 brought an end to logging. Tree-sitting and human barricades bought enough time for political change to halt the Cape Tribulation-Bloomfield Road in Queensland’s Wet Tropics. In Coolbellup numerous lock-ons and tree-sits have delayed works, but time is running out for the wetlands in the path of Roe 8.

After the March 11 election we will know whether the already bulldozed area will be restored, or whether the road will be built. Whatever the outcome, one thing is certain: pressure is building on resources and urban spaces, and the indicators of environmental health are continuing to decline.

This trend makes it ever more likely that our economic and political priorities will find themselves on a collision course with communities seeking to protect their local environments. It seems safe to say that we will see plenty more protests like this in coming years.

The Conversation

Andrea Gaynor is affiliated with The Beeliar Group: Professors for Environmental Responsibility.

Categories: Around The Web

New coal plants wouldn't be clean, and would cost billions in taxpayer subsidies

Thu, 2017-02-02 18:31
Even the cleanest coal plants add millions of tonnes of greenhouse gases to the atmosphere each year. Coal image from www.shutterstock.com

Following a campaign by the coal industry, Prime Minister Malcolm Turnbull has argued for new coal-fired power stations in Australia. But these plants would be more expensive than renewables and carry a huge liability through the carbon emissions they produce.

Major Australian energy companies have ruled out building new coal plants. The Australian Energy Council sees them as “uninvestable”. Banks and investment funds would not touch them with a barge pole. Only government subsidies could do it.

It may seem absurd to spend large amounts of taxpayers’ money on last century’s technology that will be more costly than renewable power and would lock Australia into a high-carbon trajectory.

But the government is raising the possibility of government funding for new coal plants, with statements by Deputy Prime Minister Barnaby Joyce, Treasurer Scott Morrison and Environment and Energy Minister Josh Frydenberg. The suggestion is to use funding from the Clean Energy Finance Corporation. For this to happen, presumably the CEFC’s investment mandate would need to be changed, or the meaning of “low-emissions technologies” interpreted in a radical way.

It should come to nothing, if minimum standards of sensible policy prevailed.

But an ill wind is blowing in Australia’s energy and climate policy debate. The situation in parliament is difficult, and the Trump presidency is giving the right wing in the Coalition a boost.

Definitely not ‘clean’

Proponents of new coal plants call them “clean coal”. They have appropriated a term that normally means burning coal in power stations with carbon capture and storage, a technology that filters out most of the carbon dioxide. But this is expensive and has made little progress.

Turnbull and others are simply suggesting Australia build the latest generation of conventional coal-burning plants. They are not clean – merely marginally less polluting than the old plants running now.

A new high-efficiency coal plant run on black coal would produce about 80% of the emissions of an equivalent old plant. An ultra-supercritical coal plant running on black coal emits about 0.7 tonnes of CO₂ per megawatt hour of electricity, or about 0.85 tonnes using brown coal. That is anything but clean.

For comparison, typical old “dirty” black coal plants in operation now emit around 0.9 tonnes, so the improvement from replacing them with the latest technology is not large. Gas plants produce between 0.4 and 0.6 tonnes, much less than the suggested new coal plants. Gas has the added benefit of being able to respond flexibly to demand. A plant with carbon capture and storage might emit around 0.05 tonnes, and renewables zero.

The Australian grid average right now is around 0.8 tonnes and gradually falling. New coal would tend to keep that average higher over the long term.

A single typically sized new coal plant could emit in the order of 5 million tonnes of CO₂ each year – about 1% of Australia’s current annual emissions – and would have an expected lifetime of 40-60 years. It would also pollute the air locally, as all coal plants do, damaging people’s health.
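The 5-million-tonne figure can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a hypothetical 1,000 MW plant running at an 85% capacity factor; the plant size and capacity factor are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope check of the emissions figures quoted above,
# for a hypothetical 1,000 MW plant at an 85% capacity factor.

HOURS_PER_YEAR = 8760

def annual_emissions_mt(capacity_mw, capacity_factor, intensity_t_per_mwh):
    """Annual CO2 emissions in megatonnes for a plant of the given size."""
    generation_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
    return generation_mwh * intensity_t_per_mwh / 1e6

# New ultra-supercritical black coal plant at 0.7 t CO2/MWh
new_black = annual_emissions_mt(1000, 0.85, 0.7)
# Old "dirty" black coal plant at 0.9 t CO2/MWh
old_black = annual_emissions_mt(1000, 0.85, 0.9)

print(f"new black coal: {new_black:.1f} Mt/yr")  # roughly 5 Mt, matching the article
print(f"old black coal: {old_black:.1f} Mt/yr")
```

With these assumptions the new plant lands at about 5.2 Mt a year, consistent with the “in the order of 5 million tonnes” claim.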

If we wanted to make up for the extra coal emissions by doing more in industry, transport or agriculture, then this would come at a cost in those parts of the economy. In-depth research has shown that decarbonisation of Australia’s economy needs to have zero-carbon electricity supply at its core.

What if we don’t care about the climate?

Building coal power plants is expensive. The average lifetime cost of producing power with ultra-supercritical plants in Australia is estimated at around A$80 per megawatt-hour. This assumes financing is available at standard interest rates and that the plant runs at high capacity.

Given the risk that the plants will be liable under stricter carbon limits in the future, financing costs are bound to be higher, pushing the average cost probably north of A$100 per megawatt-hour – and perhaps as much as A$160. If the plant is not fully utilised, as is already the case for existing coal plants, average costs will be even higher.

By comparison, wind farms now get built at an average cost of A$75 per megawatt-hour, and solar parks at around A$110. Both are expected to come down to perhaps A$50 by 2025. New coal plants take many years to prepare and build, so 2025 is the relevant comparison.

In fact, the overall comparison costs for renewables are even lower. This is because wind and solar built in 2025 would be replaced in the 2050s with even cheaper systems.

There are extra costs associated with wind and solar – for instance, through pumped-hydro storage or more gas-fired power plants to balance supply. But these costs are far less than the underlying cost of renewables.

So renewables including system integration costs will be cheaper than new coal plants, perhaps by quite a margin. Let’s say, very conservatively, that renewables are A$20 per megawatt-hour cheaper. For the coal plant that’d be an extra cost of A$150 million per year, or A$6 billion over 40 years. The extra cost could be much higher if the plant was retired before the 2060s or not run at full capacity.
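The A$150 million-a-year and A$6 billion figures follow from simple arithmetic. A minimal sketch, assuming a hypothetical 1,000 MW plant at an 85% capacity factor (both illustrative assumptions, not figures stated in the article):

```python
# Sketch of the cost-gap arithmetic in the paragraph above, for a
# hypothetical 1,000 MW coal plant at an 85% capacity factor.

HOURS_PER_YEAR = 8760

capacity_mw = 1000
capacity_factor = 0.85
gap_per_mwh = 20.0       # A$/MWh: the "very conservative" gap quoted above
lifetime_years = 40

generation_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
extra_per_year = generation_mwh * gap_per_mwh      # A$ per year
extra_lifetime = extra_per_year * lifetime_years   # A$ over 40 years

print(f"extra cost per year: A${extra_per_year / 1e6:.0f} million")
print(f"extra cost over {lifetime_years}y: A${extra_lifetime / 1e9:.1f} billion")
```

At these assumed numbers the gap works out to roughly A$149 million a year, or about A$6 billion over a 40-year life, in line with the figures above.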

The subsidy required would be potentially billions of dollars for each plant. That’s billions of dollars from the taxpayer or electricity user, in order to supply power with high carbon emissions that are then locked in for half a century. It should not happen in a country that prides itself on rational economic policy.

Instead, government should set its sights on the long-term economic opportunities for Australia in a low-carbon world, and chart a path for the transition of the energy system.

Turnbull referred to Australia’s position as a coal exporter. But a revolution is under way in energy technologies. While coal will continue to be used in existing plants, the times of growing coal use are over. Already more than 70% of the world’s annual power sector investment goes to renewables.

Australia is lucky in that there are no limits to the amount of renewable energy that could be produced. New industries can be built around it. We should invest in the industries of the future, not sink more money into the technologies of last century.

The Conversation

Frank Jotzo has received research grant funding from a number of sources, mostly government and the Australian Research Council.


The government is right to fund energy storage: a 100% renewable grid is within reach

Thu, 2017-02-02 15:14
With the right mix, the grid can go fully renewable for the same cost and reliability as fossil fuels.

In a speech to the National Press Club yesterday, Prime Minister Malcolm Turnbull declared that the key requirements for Australia’s electricity system are that it should be affordable, reliable, and able to help meet national emissions-reduction targets. He also stressed that efforts to pursue these goals should be “technology agnostic” – that is, the best solutions should be chosen on merit, regardless of whether they are based on fossil fuels, renewable energy or other technologies.

As it happens, modern wind, solar photovoltaics (PV) and off-river pumped hydro energy storage (PHES) can meet these requirements without heroic assumptions, at a cost that is competitive with fossil fuel power stations.

Turnbull and his government have also correctly identified energy storage as key to supporting high system reliability. Wind and solar are intermittent sources of generation, and while we are getting better at forecasting wind and sunshine on time scales from seconds to weeks, storage is nevertheless necessary to keep supply and demand in balance at high penetrations of wind and PV.

Storage becomes important once the variable renewable energy component of electricity production rises above 50%. Australia currently sources about 18% of its electricity from renewables – hydroelectricity in the Snowy Mountains and Tasmania, wind energy and the ever-growing number of rooftop PV installations.

Meanwhile, in South Australia renewable energy is already at around 50% – mostly wind and PV – and so this state now has a potential economic opportunity to add energy storage to the grid.

Pushing storage

To help realise this potential, in South Australia and elsewhere, the Clean Energy Finance Corporation (CEFC) and the Australian Renewable Energy Agency (ARENA) will spend A$20 million of public funds on helping flexible capacity and large-scale energy storage projects become commercially viable, including pumped hydro and batteries.

PHES constitutes 97% of worldwide electricity storage. The retail market for household storage batteries such as Tesla’s Powerwall is growing, but large-scale storage batteries are still much more expensive than PHES. “Off-river” pumped hydro has a bright future in Australia and many other countries, because there are many suitable sites.

Wind and PV are the overwhelming winners in terms of new low-emissions electricity generation because they cost less than the alternatives. Indeed, PV and wind constituted half of the world’s new generation capacity installed in 2015 and nearly all new generation capacity installed in Australia.

Recently, we modelled the National Electricity Market (NEM) for a 100% renewable energy scenario. In this scenario wind and PV provide 90% of annual electricity, with existing hydro and bioenergy providing the balance. In our modelling, we avoid heroic assumptions about future technology development, by only including technology that has already been deployed in quantities greater than 100 gigawatts – namely wind, PV and PHES.

Reliable, up-to-date pricing is available for these technologies, so our cost estimates are more robust than those of models that rely on technology deployment and cost-reduction projections far removed from today’s reality.

In our modelling, we use historical data for wind, sun and demand for every hour of the years 2006-10. Very wide distribution of PV and wind across the network reduces supply shortfalls by taking advantage of different weather systems. Energy balance between supply and demand is maintained by adding sufficient PHES, high-voltage transmission capacity and excess wind and PV capacity.

Not an expensive job

The key outcome of our work is that the extra cost of balancing renewable energy supply with demand on an hourly, rather than annual, basis is modest: A$25-30 per megawatt-hour (MWh). Importantly, this cost is an upper bound, because we have not factored in the use of demand management or batteries to smooth out supply and demand even more.

What’s more, a large fraction of this estimated cost relates to periods of several successive days of overcast and windless weather, which occur only once every few years. We could make substantial further reductions through contractual load shedding, the occasional use of legacy coal and gas generators to charge PHES reservoirs, and managing the charging times of batteries in electric cars.

Using 2016 prices prevailing in Australia, we estimate that the levelised cost of energy in a 100% renewable energy future, including the cost of hourly balancing, is A$93 per MWh. The cost of wind and PV continues to fall rapidly, and so after 2020 this price is likely to be around A$75 per MWh.
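The A$93/MWh total can be pulled apart using the A$25-30/MWh balancing cost quoted earlier. The split into generation-only cost plus balancing below is an illustrative back-calculation, not a figure published in the modelling:

```python
# Back-calculating the generation-only cost implied by the A$93/MWh
# total and the A$25-30/MWh hourly-balancing cost. Illustrative
# arithmetic only; the article does not publish this split.

total_lcoe_2016 = 93                   # A$/MWh, 100% renewables incl. balancing
balancing_low, balancing_high = 25, 30  # A$/MWh, balancing cost range

underlying_low = total_lcoe_2016 - balancing_high
underlying_high = total_lcoe_2016 - balancing_low

new_coal_lcoe = 80        # A$/MWh, new supercritical black coal estimate
post_2020_estimate = 75   # A$/MWh, expected renewables total after 2020

print(f"implied generation-only cost: A${underlying_low}-{underlying_high}/MWh")
print(f"2016 renewables total vs new coal: A${total_lcoe_2016} vs A${new_coal_lcoe}")
print(f"post-2020 estimate vs new coal: A${post_2020_estimate} vs A${new_coal_lcoe}")
```

On these numbers, the 2016 renewables-plus-balancing total sits just above the new-coal estimate, while the post-2020 estimate falls below it.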

Crucially, this is comparable with the corresponding estimated figure for a new supercritical black coal power station in Australia, which has been put at A$80 per MWh.

Meanwhile, a system developed around wind, PV, PHES and existing hydro can deliver the same reliability as today’s network. PHES can also deliver many of the services that underpin a reliable energy system today: inertia, spinning reserve, rapid start, black-start capability, voltage regulation and frequency control.

Ageing system

Australia’s fossil fuel fleet is ageing. A good example is the pending closure of the 49-year-old Hazelwood brown coal power station in Victoria’s Latrobe Valley. An ACIL Allen report to the Australian Government lists the technical lifetime of each power station, and shows that two-thirds of Australia’s fossil fuel generation capacity will reach the end of its technical lifetime over the next two decades.

The practical choices for replacing these plants are fossil fuels (coal and gas) or existing large-scale renewables (wind and PV). Renewables are already economically competitive, and will be clearly cheaper by 2030.

Energy-related greenhouse gas emissions constitute about 84% of Australia’s total. Electricity generation, land transport, and heating in urban areas comprise 55% of total emissions. Conversion of these three energy functions to renewable energy is easier than for other components of the energy system.

Transport and urban heating can be electrified by deploying electric vehicles and heat pumps, respectively. Electric heat pumps are already providing strong competition for natural gas in the space and water heating markets. Importantly, these devices have large-scale storage in the form of batteries in vehicles, and thermal inertia in water and buildings. Well-integrated adoption of these technology changes will help reduce electricity prices further.

So wind, PV and PHES together yield reliability and affordability to match the current electricity system. In addition, they facilitate deep cuts to emissions at low cost that can go far beyond Australia’s existing climate target.

The Conversation

Andrew Blakers receives funding from the Australian Renewable Energy Agency.

Matthew Stocks receives funding from the Australian Renewable Energy Agency.

Bin Lu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

