The Conversation
New law finally gives voice to the Yarra River's traditional owners
On September 21, the Victorian Parliament delivered a major step forward for Victoria’s traditional owners, by passing the Yarra River Protection (Wilip-gin Birrarung murron) Act 2017. Until now, the Wurundjeri people have had little recognition of their important role in river management and protection, but the new legislation, set to become law by December 1, will give them a voice.
The Act is remarkable because it combines traditional owner knowledge with modern river management expertise, and treats the Yarra as one integrated living natural entity to be protected.
The new law recognises the various connections between the river and its traditional owners. In a first for Victorian state laws, it includes Woi-wurrung language (the language of the Wurundjeri) in both the Act’s title and in its preamble. The phrase Wilip-gin Birrarung murron means “keep the Yarra alive”. Six Wurundjeri elders gave speeches in Parliament in both English and Woi-wurrung to explain the significance of the river and this Act to their people.
The Act also gives an independent voice to the river by way of the Birrarung Council, a statutory advisory body which must have at least two traditional owner representatives on it.
Read more: Three rivers are now legally people, but that’s just the start of looking after them.
Giving legal powers to rivers has become fashionable recently. Aotearoa New Zealand passed legislation in March to give legal personhood to the Whanganui River, with the river’s voice provided by an independent guardian that includes Māori representation.
Within a week of that decision, the Uttarakhand High Court in India ruled that the Ganga and Yamuna Rivers are living entities with legal status, and ordered government officers to assume legal guardianship of the two rivers (although that decision has since been stayed by the Indian Supreme Court).
All of these developments recognise that rivers are indivisible living entities that need protection. But the Victorian legislation differs in that it doesn’t give the Yarra River legal personhood or assign it a legal guardian. The Birrarung Council, although the “independent voice” of the Yarra, will have only advisory status.
Speaking for the silent
The practice of giving legal voice to entities that cannot speak for themselves is not a new one. Children have legal guardians, as do adults who are not in a position to make decisions for themselves. We also give legal status to many non-human entities, such as corporations.
The idea of doing the same for rivers and other natural objects was first suggested back in 1972. In general terms, giving something legal personhood means it can sue or be sued. So a river’s legal guardian can go to court and sue anyone who pollutes or otherwise damages the river. (Theoretically, a river could also be sued, although this has yet to be tested.)
So how will the Yarra River be protected, if it doesn’t have legal personhood or a guardian?
Like the Whanganui River Settlement legislation, the Yarra River Protection Act provides for the development of a strategic plan for the river’s management and protection. This includes a long term community vision, developed through a process of active community participation, that will identify areas for protection. The strategic plan will also be informed by environmental, social, cultural, recreational and management principles.
These Yarra protection principles further enhance the recognition of traditional owner connection to the Yarra River. They highlight Aboriginal cultural values, heritage and knowledge, and the importance of involving traditional owners in policy planning and decision-making.
And the Birrarung Council will have an important role to play. It will provide advice and can advocate for the Yarra River, even if it can’t actually make decisions about its protection, or take people who damage the Yarra River to court.
Importantly, the Council does not have any government representatives sitting on it. Its members are selected by the environment minister for four-year terms and once appointed they can’t be removed unless they’re found to be unfit to hold office (for example, for misconduct or neglect of duty). This makes sure that the Council’s advice to the minister is truly independent.
So, although the new law will not give the Yarra River full legal personhood, it does enshrine a voice for traditional owners in the river’s management and protection – a voice that has been unheard for too long.
Katie O'Bryan is a member of the National Environmental Law Association, Environmental Justice Australia and the Australian Conservation Foundation.
I've always wondered: can animals be left- and right-pawed?
This is an article from I’ve Always Wondered, a series where readers send in questions they’d like an expert to answer. Send your question to alwayswondered@theconversation.edu.au
While watching my cat engaging in yet another battle with my shoelace, I noticed that he seemed mainly to use his left front paw. Do animals have a more dextrous side that they favour for particular tasks, just like humans? – Mike, Perth.
The short answer is: yes they do! Like humans, many animals tend to use one side of the body more than the other. This innate handedness (or footedness) is called behavioural or motor laterality.
The term laterality also refers to the primary use of the left or right hemispheres of the brain. The two halves of the animal brain are not exactly alike, and each hemisphere differs in function and anatomy. In general terms, the left hemisphere controls the right side of the body and the right hemisphere controls the left side.
Laterality is an ancient inherited characteristic and is widespread in the animal kingdom, in both vertebrates and invertebrates. Many competing theories (neurological, biological, genetic, ecological, social and environmental) have been proposed to explain how the phenomenon developed, but it remains largely a mystery.
Animal ‘handedness’
Humans tend to be right-handed. Lefties or “southpaws” make up only about 10% of the human population, and more males than females are left-handed.
Great apes show similar handedness patterns to humans. Most chimps, for instance, seem to be right-handed. But not many studies have looked at laterality in non-primate animals.
Read more: Why are most people right-handed? The answer may be in the mouths of our ancestors.
There is some evidence to suggest that dogs and cats can be right- or left-pawed, although the ratio seems to be more evenly split than in humans, and it is unclear whether there are sex differences.
If you’re a pet owner you can do an experiment for yourself. Which paw does your cat or dog lead with when reaching out for something, or to tap open a pet door?
To test your pet dog, you can place a treat-filled Kong toy directly in front of your dog and see which paw he or she uses to hold it to get the food out. A dog may use either paw or both paws.
To test your pet cat, you can set a “food puzzle” by putting a treat inside a glass jar and watching to see which paw your cat uses. Don’t forget to repeat it lots of times and take notes to see whether the effect is real or just random chance!
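If you want to check whether your tally reflects a genuine preference rather than chance, a simple binomial test will do the job. The sketch below is purely illustrative: the paw counts and the 50/50 “no preference” baseline are assumptions for the example, not part of any formal study protocol.

```python
from math import comb

def paw_preference_p_value(left: int, right: int) -> float:
    """Two-sided binomial test against a 50/50 'no preference' baseline."""
    n = left + right
    observed = max(left, right)
    # Probability of a split at least this lopsided if each reach were a coin flip.
    one_tail = sum(comb(n, k) for k in range(observed, n + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# Hypothetical tally: the cat led with its left paw on 17 of 20 reaches.
p = paw_preference_p_value(left=17, right=3)
print(f"p-value = {p:.4f}")  # about 0.003, so unlikely to be random chance
```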
Don’t forget to repeat the experiment lots of times.
Horses also seem to prefer to circle in one direction rather than the other. Meanwhile, one study suggests that kangaroos are almost exclusively lefties, although the neural basis for this is unknown.
Lateralisation and brain function
In humans, the left hemisphere is mainly associated with analytical processes and language and the right hemisphere with orientation, awareness and musical abilities, although this dichotomy is simplistic at best.
Is there evidence of lateralised brain function in non-human animals too? A team of Italian researchers think so. They found that dogs wag their tails to the right when they see something they want to approach, and to the left when confronted with something they would rather avoid. This suggests that, just as for people, the right and left halves of the brain do different jobs in controlling emotions.
Laterality is also connected to the direction in which hair grows (so-called structural laterality), or even to the senses (sensory laterality). Many animals use their left eye and left ear (indicating right brain activation) more often than the right ones when investigating objects that are potentially frightening. However, asymmetries in olfactory processing (nostril use) are less well understood.
Research suggests most kangaroos are southpaws. Ester Inbar/Wikimedia Commons, CC BY
The left or right bias in sensory laterality is separate from that of motor laterality (or handedness). However, some researchers think that side preference is linked to the direction of hair whorls (“cow licks”), which can grow in a clockwise or anticlockwise direction. More right-handed people have a clockwise hair pattern, although it is unclear if this is true of other animals.
The direction of hair growth and handedness are also related to temperament. Left-handed people might be more vulnerable to stress, as are left-pawed dogs and many other animals. In general, many animals, including humans, that have a clockwise hair whorl are less stress-prone than those with anticlockwise hair growth. The position of the hair whorl also matters; cattle and horses with hair whorls directly above the eyes are more typically difficult to handle than those with whorls lower down on the face.
Elsewhere in the animal kingdom, snails also have a form of laterality, despite having a very different nervous system to vertebrates like us. Their shells spiral in either a “right-handed” or “left-handed” direction – a form of physical asymmetry called “chirality”. This chirality is inherited – snails can only mate with matching snails.
Chirality is even seen in plants, depending on the asymmetry of their leaves, and the direction in which they grow.
As an aside, left-handedness has been discriminated against in many cultures for centuries. The Latin word sinistra originally meant “left” but its English descendant “sinister” has taken on meanings of evil or malevolence. The word “right”, meanwhile, connotes correctness, suitability and propriety. Many everyday objects, from scissors to notebooks to can-openers, are designed for right-handed people, and the Latin word for right, dexter, has given us the modern word “dextrous”.
Why is the brain lateralised?
One adaptive advantage of lateralisation is that individuals can perform two tasks at the same time if those tasks are governed by opposite brain hemispheres. Another advantage might be resistance to disease – hand preference in animals is associated with differences in immune function, with right-handed animals mounting a better immune response.
Does it matter if your cat, dog, horse or cow favours one paw (or hoof) over another? Determining laterality – or which side of the brain dominates the other – could change the way domestic animals are bred, raised, trained and used, including predicting which puppies will make the best service dogs, and which racehorses will race better on left- or right-curving tracks.
And even if your dog or cat never clutches a pen, or uses one limb more than the other, just be grateful that they haven’t yet developed opposable thumbs!
This article is dedicated to the memory of Bollo the cat, who inspired this question but has since passed away.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
Can two clean energy targets break the deadlock of energy and climate policy?
Malcolm Turnbull’s government has been wrestling with the prospect of a clean energy target ever since Chief Scientist Alan Finkel recommended it in his review of Australia’s energy system. But economist Ross Garnaut has proposed a path out of the political quagmire: two clean energy targets instead of one.
Garnaut’s proposal is essentially a flexible emissions target that can be adapted to conditions in the electricity market. If electricity prices fail to fall as expected, a more lenient emissions trajectory would likely be pursued.
This proposal is an exercise in political pragmatism. If it can reassure both those who fear that rapid decarbonisation will increase energy prices, and those who argue we must reduce emissions at all costs, it represents a substantial improvement over the current state of deadlock.
Ross Garnaut/Yann Robiou DuPont, Author provided
Will two targets increase investor certainty?
At a recent Melbourne Economic Forum, Finkel pointed out that investors do not require absolute certainty to invest. After all, it is for accepting risks that they earn returns. If there were no risk to accept, there would be no legitimate right to a return.
But Finkel also pointed out that investors value policy certainty and predictability. Without it, they require more handsome returns to compensate for the higher policy risks they have to absorb.
Read more: Turnbull is pursuing ‘energy certainty’ but what does that actually mean?
At first sight, having two possible emissions targets introduces yet another uncertainty (the emissions trajectory). But is that really the case? The industry is keenly aware of the political pressures that affect emissions reduction policy. If heavy reductions cause prices to rise further, there will be pressure to soften the trajectory.
Garnaut’s suggested approach anticipates this political reality and codifies it in a mechanism to determine how emissions trajectories will adjust to future prices. Contrary to first impressions, it increases policy certainty by providing clarity on how emissions policy should respond to conditions in the electricity market. This will promote the sort of policy certainty that the Finkel Review has sought to engender.
Could policymakers accept it?
Speaking of political realities, could this double target possibly accrue bipartisan support in a hopelessly divided parliament? Given Tony Abbott’s recent threat to cross the floor to vote against a clean energy target (bringing an unknown number of friends with him), the Coalition government has a strong incentive to find a compromise that both major parties can live with.
Read more: Abbott’s disruption is raising the question: where will it end?
Turnbull and his energy minister, Josh Frydenberg, who we understand are keen to see Finkel’s proposals taken up, could do worse than put this new idea on the table. They have to negotiate with parliamentary colleagues whose primary concern is the impact of household electricity bills on voters, as well as those who won’t accept winding back our emissions targets.
Reassuringly, the government can point to some precedent. Garnaut’s proposal is novel in Australia’s climate policy debate, but is reasonably similar to excise taxes on fuel, which in some countries vary as a function of fuel prices. If fuel prices decline, excise taxes rise, and vice versa. In this way, governments can achieve policy objectives while protecting consumers from the price impacts of those objectives.
The devil’s in the detail
Of course, even without the various ideologies and vested interests in this debate, many details would remain to be worked out. How should baseline prices be established? What is the hurdle to justify a more rapid carbon-reduction trajectory? What if prices tick up again, after a more rapid decarbonisation trajectory has been adopted? And what if prices don’t decline from current levels: are we locking ourselves into a low-carbon-reduction trajectory?
These issues will need to be worked through progressively, but there is no obvious flaw that should deter further consideration. The fundamental idea is attractive, and it looks capable of ameliorating concerns that rapid cuts in emissions will lock in higher electricity prices.
For mine, I would not be at all surprised if prices decline sharply as we begin to decarbonise, such is the staggering rate of technology development and cost reductions in renewable energy. But I may of course be wrong. Garnaut’s proposal provides a mechanism to protect consumers if this turns out to be the case.
Bruce Mountain does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Is BHP really about to split from the Minerals Council's hive mind?
Shareholder action has struck again (perhaps). The Australasian Centre for Corporate Responsibility, on behalf of more than 120 shareholders of BHP, has convinced the Big Fella to reconsider its membership of the Minerals Council of Australia.
Business associations and umbrella groups exist to advance the interests of their members. The ones we know most about are those that are in the public eye, lobbying, producing position papers that put forward controversial and unpopular positions (while giving their members plausible deniability), running television adverts, and attacking their opponents as naive idealists at best, or luddites and watermelons (green on the outside, red on the inside) at worst.
Read more: Risky business: how companies are getting smart about climate change.
This has been going on for a century, as readers of the late Alex Carey’s Taking the Risk out of Democracy: Corporate Propaganda versus Freedom and Liberty will know. (Another Australian, Sharon Beder, ably continued his work, and more recently yet another Australian, Kerryn Higgs, has written excellently on this.)
Alongside the gaudy outfits sit lower-profile and occasionally very powerful coordinating groups, such as the Australian Industry Greenhouse Network (see Guy Pearse’s book High and Dry for details).
Ultimately, however, membership of these groups can have costs to companies – beyond the financial ones. If an industry body strays too far from the public mood, individual companies can feel the heat. This happened in the United States in 2002, with the Global Climate Coalition, a front group for automakers and the oil industry that succeeded in defeating the Kyoto Protocol but then outlived its usefulness. It happened again in 2009 when a group of companies (including Nike, Microsoft and Johnson & Johnson) decided their reputations were being damaged by continued membership of the US Chamber of Commerce, which was taking a particularly intransigent line on President Barack Obama’s climate efforts.
Doings Down Under
What’s interesting in this latest spat is that it involves two very powerful players. Let’s look at them in turn.
The Minerals Council of Australia (MCA) began life in 1967, as the Australian Mining Industry Council, when Australia’s export boom for coal, iron ore and other commodities was taking off.
From its earliest days it found itself embroiled in both Aboriginal land rights and environmental disputes, having established an environment subcommittee in 1972. Over time, the Council took a robust line on both topics, to put it mildly.
In 1990, at the height of green concerns, the then federal environment minister Ros Kelly offered a scathing assessment of the council, saying that its idea of a sustainable industry was:
…one in which miners can mine where they like, for however long they want. It is about, for them, sustaining profits and increasing access to all parts of Australia they feel could be minerally profitable, even if it is of environmental or cultural significance.
Meanwhile, the council’s intransigent position on Aboriginal land rights, especially after the 1992 Mabo decision, caused it to lose both credibility and – crucially – access to land rights negotiations.
Geoff Allen, a business guru who had created the Business Council of Australia, was called in to write a report, which led the Minerals Council to adopt its present name, and a more emollient tone.
The MCA’s peak of influence (so far?) was its role in the Keep Mining Strong campaign of 2010, which sank Kevin Rudd’s planned super-profits tax. The following year, it combined with other business associations to form the Australian Trade and Industry Alliance, launching an advertising broadside against Julia Gillard’s carbon pricing scheme (which was not, as former Liberal staffer Peta Credlin has now admitted, a “carbon tax”).
Bashing the carbon tax.
The MCA has since kept up a steady drumbeat of attacks on renewable energy, and most infamously supplied the (lacquered) lump of coal brandished by Treasurer Scott Morrison in parliament.
Read more: Hashtags v bashtags: a brief history of mining advertisements (and their backlashes).
The most important thing, for present purposes, to understand about the MCA is that it may well have been the subject of a reverse takeover by the now defunct Australian Coal Association. In a fascinating article in 2015, Mike Seccombe pointed out that:
Big as the coalmining industry is in Australia, it accounts for only a bit more than 20% of the value of our mineral exports. Yet now the Minerals Council has come to be dominated by just that one sector, coal… Representatives of the biggest polluters on the planet now run the show.
This brings us to BHP. As a global resources player, with fingers in many more pies than just coal (indeed, it has spun its coal interests off into a company called South32), it has remained phlegmatic about carbon pricing, even as the MCA and others have got into a flap.
Read more: Say what you like about BHP, it didn’t squander the boom.
To BHP, the advent of carbon pricing in Australia was if anything a welcome development. The move offered two main benefits: valuable experience of doing business under carbon pricing, and a chance to influence policy more easily than in bigger, more complex economies.
In 2000, the company’s American chief executive, Paul Anderson, tried to get the Business Council of Australia to discuss ratification of the Kyoto Protocol (which would build pressure for local carbon pricing). He couldn’t get traction. Interviewed in 2007, he recalled:
I held a party and nobody came… They sent some low-level people that almost read from things that had been given to them by their lawyers. Things like, ‘Our company does not acknowledge that carbon dioxide is an issue and, if it is, we’re not the cause of it and we wouldn’t admit to it anyway.’
The schism
As the physicist Niels Bohr said, “prediction is very difficult, especially about the future”. I wouldn’t want to bet on whether BHP will actually go ahead and leave the MCA, or whether the Minerals Council will revise its hostile position on environmental sustainability.
BHP has promised to “make public, by 31 December 2017, a list of the material differences between the positions we hold on climate and energy policy, and the advocacy positions on climate and energy policy taken by industry associations to which we belong”.
In reaching for a metaphor to try and explain the situation, I find myself coming back to an episode of Star Trek: The Next Generation. The heroic crew has captured an individual from the “Borg”, a collective hive-mind entity. They plan to implant an impossible image in its brain, knowing that upon release it will reconnect, shunt the image upwards for the hive mind to try to understand, and thus drive the entire Borg stark raving mad as it tries in vain to compute the information it is receiving.
This analogy is admittedly crude, I’ll grant you. It is, I submit, also a pretty accurate picture of what might happen when an MCA member grows a climate conscience.
Developing countries can prosper without increasing emissions
One of the ironies of fighting climate change is that developed countries – which have benefited from decades or centuries of industrialisation – are now asking developing countries to abandon highly polluting technology.
But as developing countries work hard to grow their economies, there are real opportunities to leapfrog the significant investment in fossil fuel technology typically associated with economic development.
This week, researchers, practitioners and policy makers from around the world are gathered in New York City for the International Conference on Sustainable Development as part of Climate Week. We at ClimateWorks will be putting the spotlight on how developing countries can use low- or zero-emissions alternatives to traditional infrastructure and technology.
Read more: How trade policies can support global efforts to curb climate change
Developing nations are part of climate change
According to recent analysis, six of the top 10 emitters of greenhouse gases are now developing countries (this includes China). Developing countries as a bloc already account for about 60% of global annual emissions.
If we are to achieve the global climate targets of the Paris Agreement, these countries need an alternative path to prosperity. We must decouple economic growth from carbon emissions. In doing so, these nations may avoid many of the environmental, social and economic costs that are the hallmarks of dependence on fossil fuels.
This goal is not as far-fetched as it might seem. ClimateWorks has been working as part of the Deep Decarbonization Pathways Project, a global collaboration of researchers looking for practical ways countries can radically reduce their carbon emissions – while sustaining economic growth.
For example, in conjunction with the Australian National University, we have modelled a deep decarbonisation pathway that shows how Australia could achieve net zero emissions by 2050, while the economy grows by 150%.
Similarly, data compiled by the World Resources Institute shows that 21 countries have reduced annual greenhouse gas emissions while simultaneously growing their economies since 2000. This includes several eastern European countries that have experienced rapid economic growth in the past two decades.
PricewaterhouseCoopers’ Low Carbon Index also found that several G20 countries have reduced the carbon intensity of their economies while maintaining real GDP growth, including nations classified as “developing”, such as China, India, South Africa and Mexico.
‘Clean’ economic growth for sustainable development
If humankind is to live sustainably, future economic growth must minimise environmental impact and maximise social development and inclusion. That’s why in 2015, the UN adopted the Sustainable Development Goals: a set of common aims designed to balance human prosperity with protection of our planet by 2030.
These goals include a specific directive to “take urgent action to combat climate change and its impacts”. Likewise, language in the Paris Climate Agreement recognises the needs of developing countries in balancing economic growth and climate change.
The Sustainable Development Goals are interconnected, and drawing these links can provide a compelling rationale for strong climate action. For example, a focus on achieving Goal 7 (Affordable and Clean Energy) that also considers Goal 13 (Climate Action) will prioritise low or zero-emissions energy technologies. This in turn delivers health benefits and saves lives (Goal 3) through improved air quality, which also boosts economic productivity (Goal 8).
Read more: Climate change set to increase air pollution deaths by hundreds of thousands by 2100
Therefore efforts to limit global temperature rise to below 2℃ must be considered within the context of the Sustainable Development Goals. These global goals are intrinsically linked to solving climate change.
But significant barriers prevent developing countries from adopting low-emissions plans and ambitious climate action. Decarbonisation is often not a priority for less developed countries, compared to key issues such as economic growth and poverty alleviation. Many countries struggle with gaps in technical and financial expertise, a lack of resources and inconsistent energy data. More fundamentally, poor governance and highly complex or fragmented decision-making also halt progress.
It’s in the best interest of the entire world to help developing countries navigate these problems. Creating long-term, lowest-emissions strategies, shaped to each country’s unique circumstances, is crucial to maintaining growth while reducing emissions. Addressing these problems is the key to unlocking the financial flows required to move to a just, equitable and environmentally responsible future.
Meg Argyriou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Politics podcast: AGL chief economist Tim Nelson on what to do with Liddell
In the eye of the storm over energy policy is Liddell, an ageing coal-fired power station owned by energy giant AGL.
Prime Minister Malcolm Turnbull has twisted the arm of AGL chief executive Andy Vesey to take to the company’s board the proposition that it should extend the plant’s life beyond its scheduled 2022 closure, or alternatively sell it to an operator that would keep it running.
AGL chief economist Tim Nelson says the company is running the rule over both options, but he argues that preserving the power station may not be the best solution. “The decision is not just economic, it is also a commitment on carbon risk.”
Nelson says the emissions profile of extending the life of coal-fired power stations is inconsistent with current commitments in AGL’s greenhouse gas policy and the government’s undertakings under the Paris climate accord. Add to that the hefty rehabilitation costs for 50-year-old Liddell and it seems “the numbers don’t add up”.
While AGL is reviewing the options put forward by the government, it is so far sticking to its own alternatives for the site – repurposing it, or repowering it with zero-emissions technology.
But without a coherent policy framework it is hard to see an orderly transition in the energy market. Nelson says a clean energy target could fix the uncertainty, encouraging the replacement of old technology with a combination of renewables and “complementary capacity from flexible sources”.
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Review of historic stock routes may put rare stretches of native plants and animals at risk
Since the 19th century, Australian drovers have moved their livestock along networks of stock routes. Often following traditional Indigenous pathways, these corridors and stepping-stones of remnant vegetation cross the heavily cleared wheat and sheep belt in central New South Wales.
The publicly owned Travelling Stock Reserve network of New South Wales is now under government review, which could see the ownership of much of this crown land move into private hands.
But in a study published today in the Australian Journal of Botany we suggest that privatising stock routes may endanger vital woodlands and put vulnerable species at risk.
Read more: How ancient Aboriginal star maps have shaped Australia’s highway network
The review will establish how individual reserves are currently being used. Although originally established for graziers, the patches of bush in the network are now more likely to be used for recreation, cultural tourism, biodiversity conservation, apiary and drought-relief grazing.
This shift away from simply moving livestock has put pressure on the government to seek “value” in the network. The review will consider proposals from individuals and organisations to buy or acquire long-term leases for particular reserves.
It is likely that most proposals to purchase travelling stock reserves would come from existing agricultural operations.
A precious national resource
Travelling stock reserves across New South Wales represent some of the most intact examples of now-endangered temperate grassy woodland ecosystems.
Our research found that changing the status or use of these reserves could seriously impact these endangered woodlands. They criss-cross highly developed agricultural landscapes, which contain very limited amounts of remnant vegetation (areas where the bush is relatively untouched). Travelling stock reserves are therefore crucially important patches of habitat and resources for native plants and animals.
This isn’t the first time a change in ownership of travelling stock reserves has been flagged. Over the last century, as modern transport meant the reserves were used less and less for traditional droving, pressure to release these areas for conventional agriculture has increased.
Historic stock routes are still used for grazing cattle. Daniel Florance, Author provided
To understand what a change in land tenure might mean to the conservation values of these woodlands, we spent five years monitoring vegetation in stock reserves in comparison to remnant woodlands on private farmland.
We found that travelling stock reserves contained a higher number of native plant species, more native shrubs, and fewer exotic plants than woodland remnants on private land.
The higher vegetation quality in travelling stock reserves was maintained over the five years, which included both the peak of Australia’s record-breaking Millennium Drought and the heavy rainfall that followed, referred to as the “Big Wet”.
The take-home message was that remnant woodland on public land was typically in better nick than in private hands.
Importantly, other studies have found that this high-quality vegetation is critical for many threatened and vulnerable native animals. For example, eastern yellow robins and black-chinned honeyeaters occur more frequently in places with more shrubs growing below the canopy.
The vulnerable superb parrot also uses travelling stock reserves for habitat. Damian Michael, Author provided
The contrast we saw between woodlands in travelling stock reserves and private land reflects the different ways they’re typically managed. Travelling stock reserves have a history of periodic low-intensity grazing, mostly by cattle, with long rest periods. Woodland on active farms tends to be more intensively grazed, by sheep and cattle, often without any strategic rest periods.
The stock reserves’ future
The uncertain future of travelling stock reserves casts doubt on the state of biodiversity across New South Wales.
The current review of travelling stock reserves is considering each reserve in isolation. It flies in the face of the belief of many managers, practitioners and researchers that the true value of these reserves is in the integrity of the entire network – that the whole is greater than the sum of its parts.
Travelling stock reserves protect threatened species, allow the movement of wildlife, are seed sources for habitat restoration efforts, and support the ecosystem of adjacent agricultural land. These benefits depend on the quality of the remnant vegetation, which is determined by the grazing regime imposed by who owns and manages the land.
Of course, not all travelling stock reserves are in good condition. Some are subject to high-intensity livestock grazing (for example, under longer-term grazing leases) coupled with a lack of funding to manage and enhance natural values.
Changing the land tenure status of travelling stock reserves risks increasing grazing pressure, which our study suggests would reduce ecosystem quality and decrease their conservation value.
The travelling stock routes are important parts of our ecosystem, our national heritage, and our landscape. They can best be preserved by remaining as public land, so the entire network can be managed sustainably.
This requires adequate funding for the Local Land Services, so they can appropriately manage pest animals, weeds, erosion and illegal firewood harvesting and rubbish dumping.
Travelling stock reserves are more than just The Long Paddock – they are important public land, whose ecological value has been maintained under public control. They should continue to be managed for the public good.
Luke S. O'Loughlin has received funding from the Hermon Slade Foundation and the Holsworth Wildlife Endowment Fund
Damian Michael receives funding from the Australian Government (National Environmental Science Program) and the Murray Local Land Services
David Lindenmayer receives funding from the Australian Research Council, the Australian Government (National Environmental Science Program), the Ian Potter Foundation, the Vincent Fairfax Family Foundation, the Murray Local Land Services and the Riverina Local Land Services
Thea O'Loughlin received funding from the Murray Local Land Services.
Want energy storage? Here are 22,000 sites for pumped hydro across Australia
The race is on for storage solutions that can help provide secure, reliable electricity supply as more renewables enter Australia’s electricity grid.
With the support of the Australian Renewable Energy Agency (ARENA), we have identified 22,000 potential pumped hydro energy storage (PHES) sites across all states and territories of Australia. PHES can readily be developed to balance the grid with any amount of solar and wind power, all the way up to 100%, as ageing coal-fired power stations close.
Solar photovoltaics (PV) and wind are now the leading two generation technologies in terms of new capacity installed worldwide each year, with coal in third spot (see below). PV and wind are likely to accelerate away from other generation technologies because of their lower cost, large economies of scale, low greenhouse emissions, and the vast availability of sunshine and wind.
New generation capacity installed worldwide in 2016. ANU/ARENA, Author provided
Although PV and wind are variable energy resources, the approaches to support them to achieve a reliable 100% renewable electricity grid are straightforward:
Energy storage in the form of pumped hydro energy storage (PHES) and batteries, coupled with demand management; and
Strong interconnection of the electricity grid between states using high-voltage power lines spanning long distances (in the case of the National Electricity Market, from North Queensland to South Australia). This allows wind and PV generation to access a wide range of weather, climate and demand patterns, greatly reducing the amount of storage needed.
PHES accounts for 97% of energy storage worldwide because it is the cheapest form of large-scale energy storage, with an operational lifetime of 50 years or more. Most existing PHES systems require dams located in river valleys. However, off-river PHES has vast potential.
Read more: How pushing water uphill can solve our renewable energy issues.
Off-river PHES requires pairs of modestly sized reservoirs at different altitudes, typically with an area of 10 to 100 hectares. The reservoirs are joined by a pipe with a pump and turbine. Water is pumped uphill when electricity generation is plentiful; then, when generation tails off, electricity can be dispatched on demand by releasing the stored water downhill through the turbine. Off-river PHES typically delivers maximum power for between five and 25 hours, depending on the size of the reservoirs.
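As a rough guide to what those figures mean physically, the energy stored at a site is simply the weight of water in the upper reservoir multiplied by the head, discounted by round-trip efficiency. The sketch below uses illustrative numbers (a 10-hectare, 20-metre-deep reservoir, 300 metres of head, 80% round-trip efficiency); these are assumptions for the example, not figures from the site survey.

```python
# Back-of-envelope energy stored at one off-river PHES site.
# All input values are illustrative assumptions, not survey data.
RHO = 1000        # density of water, kg per cubic metre
G = 9.8           # gravitational acceleration, m/s^2

area_m2 = 10 * 10_000   # 10-hectare upper reservoir
depth_m = 20            # average usable depth of the reservoir
head_m = 300            # height difference between the two reservoirs
round_trip_efficiency = 0.8

volume_m3 = area_m2 * depth_m
energy_joules = RHO * G * head_m * volume_m3 * round_trip_efficiency
energy_gwh = energy_joules / 3.6e12   # 1 GWh = 3.6e12 joules

print(f"Stored energy: {energy_gwh:.1f} GWh")
# About 1.3 GWh: enough to run a ~260 MW turbine for roughly five hours,
# consistent with the 5-25 hour range quoted above.
```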
Most of the potential PHES sites we have identified in Australia are off-river. All 22,000 of them are outside national parks and urban areas.
The locations of these sites are shown below. Each site has between 1 gigawatt-hour (GWh) and 300GWh of storage potential. To put this in perspective, our earlier research showed that Australia needs just 450GWh of storage capacity (and 20GW of generation power) spread across a few dozen sites to support a 100% renewable electricity system.
In other words, Australia has so many good sites for PHES that only the best 0.1% of them will be needed. Developers can afford to be choosy with this significant oversupply of sites.
Pumped hydro sites in Australia. ANU/ARENA, Author provided
Here is a state-by-state breakdown of sites (detailed maps of sites, images and information can be found here):
NSW/ACT: Thousands of sites scattered over the eastern third of the state
Victoria: Thousands of sites scattered over the eastern half of the state
Tasmania: Thousands of sites scattered throughout the state outside national parks
Queensland: Thousands of sites along the Great Dividing Range within 200km of the coast, including hundreds in the vicinity of the many wind and PV farms currently being constructed in the state
South Australia: Moderate number of sites, mostly in the hills east of Port Pirie and Port Augusta
Western Australia: Concentrations of sites in the east Kimberley (around Lake Argyle), the Pilbara and the Southwest; some are near mining sites including Kalgoorlie. Fewer large hills than other states, and so the minimum height difference has been set at 200m rather than 300m.
Northern Territory: Many sites about 300km south-southwest of Darwin; a few sites within 200km of Darwin; many good sites in the vicinity of Alice Springs. Minimum height difference also set at 200m.
The maps below show synthetic Google Earth images for potential upper reservoirs in two site-rich regions (more details on the site search are available here). There are many similarly site-rich regions across Australia. The larger reservoirs shown in each image are of such a scale that only about a dozen of similar size distributed across the populated regions of Australia would be required to stabilise a 100% renewable electricity system.
Araluen Valley near Canberra. At most, one of the sites shown would be developed. ANU/ARENA, Author provided
Townsville, Queensland. At most, one of the sites shown would be developed. ANU/ARENA, Author provided
The chart below shows the largest identified off-river PHES site in each state in terms of energy storage potential. Also shown for comparison are the Tesla battery and the solar thermal systems to be installed in South Australia, and the proposed Snowy 2.0 system.
Largest identified off-river PHES sites in each state, together with other storage systems for comparison. ANU/ARENA, Author provided
The map below shows the location of PHES sites in Queensland together with PV and wind farms currently in an advanced stage of development, as well as the location of the Galilee coal prospect. It is clear that developers of PV and wind farms will be able to find a PHES site close by if needed for grid balancing.
Solar PV (yellow) and wind (green) farms currently in an advanced stage of development in Queensland, together with the Galilee coal prospect (black) and potential PHES sites (blue). ANU/ARENA, Author provided
Annual water requirements of a PHES-supported 100% renewable electricity grid would be less than one third that of the current fossil fuel system, because wind and PV do not require cooling water. About 3,600ha of PHES reservoir is required to support a 100% renewable electricity grid for Australia, which is 0.0005% of Australia’s land area, and far smaller than the area of existing water storages.
PHES, batteries and demand management are all likely to have prominent roles as the grid transitions to 50-100% renewable energy. Currently, about 3GW per year of wind and PV are being installed. If this continued until 2030 it would be enough to supply half of Australia’s electricity consumption. If this rate is doubled then Australia will reach 100% renewable electricity in about 2033.
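A rough check of that arithmetic is sketched below. The annual consumption figure (about 200 terawatt-hours) and the blended wind/PV capacity factor (about 30%) are illustrative assumptions, not figures from the article.

```python
# Rough check of the build-rate arithmetic. The demand and capacity-factor
# figures are assumptions for illustration, not data from the article.
HOURS_PER_YEAR = 8760
annual_demand_twh = 200      # assumed annual electricity consumption
capacity_factor = 0.30       # assumed blended wind/PV capacity factor

def years_to_reach_share(build_rate_gw_per_year: float, target_share: float) -> float:
    """Years of steady building for new wind and PV to meet a share of demand."""
    twh_per_gw_per_year = capacity_factor * HOURS_PER_YEAR / 1000  # ~2.6 TWh per GW
    return target_share * annual_demand_twh / (build_rate_gw_per_year * twh_per_gw_per_year)

print(f"3 GW/yr to 50% of demand:  {years_to_reach_share(3, 0.5):.0f} years")   # ~13 years
print(f"6 GW/yr to 100% of demand: {years_to_reach_share(6, 1.0):.0f} years")   # ~13 years
```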
Fast-track development of a few excellent PHES sites can be completed in 2022 to balance the grid when Liddell and other coal-fired power stations close.
Andrew Blakers receives funding from the Australian Renewable Energy Agency
Matthew Stocks receives funding from the Australian Renewable Energy Agency for R&D projects on solar photovoltaics and integration of renewable energy. He owns shares in Origin Energy.
Bin Lu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
More than 1,200 scientists urge rethink on Australia's marine park plans
The following is a statement from the Ocean Science Council of Australia, an internationally recognised independent group of university-based Australian marine researchers, and signed by 1,286 researchers from 45 countries and jurisdictions, in response to the federal government’s draft marine parks plans.
We, the undersigned scientists, are deeply concerned about the future of the Australian Marine Parks Network and the apparent abandoning of science-based policy by the Australian government.
On July 21, 2017, the Australian government released draft management plans that recommend how the Marine Parks Network should be managed. These plans are deeply flawed from a science perspective.
Of particular concern to scientists is the government’s proposal to significantly reduce high-level or “no-take” protection (Marine National Park Zone IUCN II), replacing it with partial protection (Habitat Protection Zone IUCN IV), the benefits of which are at best modest but more generally have been shown to be inadequate.
Read more: Australia’s new marine parks plan is a case of the emperor’s new clothes.
The 2012 expansion of Australia’s Marine Parks Network was a major step forward in the conservation of marine biodiversity, providing protection to habitats and ecological processes critical to marine life. However, there were flaws in the location of the parks and their planned protection levels, with barely 3% of the continental shelf, the area subject to greatest human use, afforded high-level protection status, and most of that of residual importance to biodiversity.
The government’s 2013 Review of the Australian Marine Parks Network had the potential to address these flaws and strengthen protection. However, the draft management plans have proposed severe reductions in high-level protection of almost 400,000 square kilometres – that is, 46% of the high-level protection in the marine parks established in 2012.
Commercial fishing would be allowed in 80% of the waters within the marine parks, including activities assessed by the government’s own risk assessments as incompatible with conservation. Recreational fishing would occur in 97% of Commonwealth waters up to 100km from the coast, ignoring the evidence documenting the negative impacts of recreational fishing on biodiversity outcomes.
Under the draft plans:
The Coral Sea Marine Park, which links the iconic Great Barrier Reef Marine Park to the waters of New Caledonia’s Exclusive Economic Zone (also under consideration for protection), has had its Marine National Park Zones (IUCN II) reduced in area by approximately 53% (see map below)
Six of the largest marine parks have had the area of their Marine National Park Zones IUCN II reduced by between 42% and 73%
Two marine parks have been entirely stripped of any high-level protection, leaving 16 of the 44 marine parks created in 2012 without any form of Marine National Park IUCN II protection.
The replacement of high-level protection with partial protection is not supported by science. The government’s own economic analyses also indicate that such a reduction in protection offers little more than marginal economic benefits to a very small number of commercial fishery licence-holders.
Retrograde step
This retrograde step by Australia’s government is a matter of both national and international significance. Australia has been a world leader in marine conservation for decades, beginning with the establishment of the Great Barrier Reef Marine Park in the 1970s and its expanded protection in 2004.
At a time when oceans are under increasing pressure from overexploitation, climate change, industrialisation, and plastics and other forms of pollution, building resilience through highly protected Marine National Park IUCN II Zones is well supported by decades of science. This research documents how high-level protection conserves biodiversity, enhances fisheries and assists ecosystem recovery, serving as essential reference areas against which areas that are subject to human activity can be compared to assess impact.
The establishment of a strong backbone of high-level protection within Marine National Park Zones throughout Australia’s Exclusive Economic Zone would be a scientifically based contribution to the protection of intact marine ecosystems globally. Such protection is consistent with the move by many countries, including Chile, France, Kiribati, New Zealand, Russia, the UK and US to establish very large no-take marine reserves. In stark contrast, the implementation of the government’s draft management plans would see Australia become the first nation to retreat on ocean protection.
Australia’s oceans are a global asset, spanning tropical, temperate and Antarctic waters. They support six of the seven known species of marine turtles and more than half of the world’s whale and dolphin species. Australia’s oceans are home to more than 20% of the world’s fish species and are a hotspot of marine endemism. By properly protecting them, Australia will be supporting the maintenance of our global ocean heritage.
The finalisation of the Marine Parks Network remains a remarkable opportunity for the Australian government to strengthen the levels of Marine National Park Zone IUCN II protection and to do so on the back of strong evidence. In contrast, implementation of the government’s retrograde draft management plans undermines ocean resilience and would allow damaging activities to proceed in the absence of proof of impact, ignoring the fact that a lack of evidence does not mean a lack of impact. These draft plans deny the science-based evidence.
We encourage the Australian government to increase the number and area of Marine National Park IUCN II Zones, building on the large body of science that supports such decision-making. This means achieving a target of at least 30% of each marine habitat in these zones, which is supported by Australian and international marine scientists and affirmed by the 2014 World Parks Congress in Sydney and the IUCN Members Assembly at the 2016 World Conservation Congress in Hawaii.
You can read a fully referenced version of the science statement here, and see the list of signatories here.
Jessica Meeuwig does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
A cleanish energy target gets us nowhere
It seems that the one certainty about any clean energy target set by the present government is that it will not drive sufficient progress towards a clean, affordable, reliable energy future. At best, it will provide a safety net to ensure that some cleanish energy supply capacity is built.
Future federal governments will have to expand or complement any target set by this government, which is compromised by its need to pander to its rump. So a cleanish energy target will not provide investment certainty for a carbon-emitting power station unless extraordinary guarantees are provided. These would inevitably be challenged in parliament and in the courts.
Read more: Turnbull is pursuing ‘energy certainty’ but what does that actually mean?
Even then, the unstoppable evolution of our energy system would leave an inflexible baseload power station without a market for much of the electricity it could generate. Instead, we must rely on a cluster of other strategies to do the heavy lifting of driving our energy market forward.
The path forward
It’s clear that consumers large and small are increasingly investing “behind the meter” in renewable energy technology, smart management systems, energy efficiency and energy storage. In so doing, they are buying insurance against future uncertainty, capturing financial benefits, and reducing their climate impacts. They are being helped by a wide range of emerging businesses and new business models, and existing energy businesses that want to survive as the energy revolution rolls on.
The Australian Energy Market Operator (AEMO) is providing critically important information on what’s needed to deliver energy objectives. The recently established Energy Security Board will work to make sure that what’s needed is done – in one way or another. Other recommendations from the Finkel Review are also helping to stabilise the electricity situation.
The recent AEMO/ARENA demand response project and various state-level energy efficiency retailer obligation schemes and renewable energy targets are examples of how important energy solutions can be driven outside the formal National Electricity Market. They can bypass the snail-paced progress of reforming the NEM.
States will play a key role
State governments are setting their own renewable energy targets, based on the successful ACT government “contracts for difference” approach, discussed below. Victoria has even employed the architect of the ACT scheme, Simon Corbell. Local governments, groups of businesses and communities are developing consortia to invest in clean energy solutions using similar models.
Some see state-level actions as undermining the national approach and increasing uncertainty. I see them as examples of our multi-layered democratic system at work. Failure at one level provokes action at another.
State-level actions also reflect increasing energy diversity, and the increasing focus on distributed energy solutions. States recognise that they carry responsibilities for energy: indeed, the federal government often tries to blame states for energy failures.
There is increasing action at the network, retail and behind-the-meter levels, driven by business and communities. While national coordination is often desirable, mechanisms other than national government leadership can work to complement national action, to the extent it occurs.
Broader application of the ACT financing model
A key tool will be a shift away from the current RET model to the broader use of variations of the ACT’s contract for difference approach. The present RET model means that project developers depend on both the wholesale electricity price and the price of Large Generation Certificates (LGCs) for revenue. These are increasingly volatile and, over the long term, uncertain. In the past we have seen political interference and low RET targets drive “boom and bust” outcomes.
So, under the present RET model, any project developer faces significant risk, which makes financing more difficult and costly.
The ACT contract for difference approach applies a “market” approach by using a reverse auction, in which rival bidders compete to offer the desired service at lowest cost. It then locks in a stable price for the winners over an agreed period of time.
The approach reduces risk for the project developer, which cuts financing costs. It shifts cost risk (and opportunity) to whoever commits to buy the electricity or other service. The downside risk is fairly small when compared with the insurance of a long-term contract and the opportunity to capture savings if wholesale electricity prices increase.
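A minimal sketch of how the settlement works is shown below. The strike price, volume and wholesale prices are invented for illustration; the point is that the generator’s revenue is pinned to the auction price while the buyer carries, and can benefit from, wholesale price movements.

```python
# Minimal contract-for-difference settlement sketch.
# Strike price, volume and wholesale prices are invented for illustration.
strike = 80.0        # $/MWh agreed at the reverse auction
volume_mwh = 1000    # energy delivered in the settlement period

for wholesale in (60.0, 80.0, 110.0):
    # The generator sells into the market at the wholesale price, then the
    # difference is settled: a top-up when wholesale is below the strike,
    # a payback when wholesale is above it.
    difference_payment = (strike - wholesale) * volume_mwh
    generator_revenue = wholesale * volume_mwh + difference_payment
    print(f"wholesale ${wholesale:>5.0f}/MWh -> generator revenue ${generator_revenue:,.0f}")

# Generator revenue comes out at the strike price in every case, which is what
# cuts its financing costs. The counterparty effectively pays the strike: it
# tops up when wholesale prices fall and is paid back when they rise.
```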
The ACT government has benefited from this scheme as wholesale prices have risen. It also includes other requirements such as the creation of local jobs. This approach can be applied by agents other than governments, such as the consortium set up by the City of Melbourne.
For business and public sector consumers, the prospect of reasonably stable energy prices, with scope to benefit if wholesale prices rise and limited downside risk, is attractive in a time of uncertainty. For project developers, a stable long-term revenue stream improves project viability.
The approach can also potentially be applied to other aspects of energy service provision, such as demand response, grid stabilisation or energy efficiency. It can also be combined with the traditional “power purchase agreement” model, where the buyer of the energy guarantees a fixed price but the project developer carries the risk and opportunity of market price variations. It can also apply to part of a project’s output, to underpin it.
While sorting out wholesale markets is important, we need to remember that this is just part of the energy bill. Energy waste, network operations, retailing and pricing structures such as high fixed charges must also be addressed. Some useful steps are being taken, but much more work is needed.
Disclosure
Alan Pears has worked for government, business, industry associations and public interest groups, and at universities, on energy efficiency, climate response and sustainability issues since the late 1970s. He is now an honorary Senior Industry Fellow at RMIT University and a consultant, as well as an adviser to a range of industry associations and public interest groups. His investments in managed funds include firms that benefit from growth in clean energy. He has shares in Hepburn Wind.
Vietnam's typhoon disaster highlights the plight of its poorest people
Six people lost their lives when Typhoon Doksuri smashed into central Vietnam on September 16, the most powerful storm in a decade to hit the country.
Although widespread evacuations prevented a higher death toll, the impact on the region’s most vulnerable people will be extensive and lasting.
Read more: Typhoon Haiyan: a perfect storm of corruption and neglect.
Government sources report that more than 193,000 properties have been damaged, including 11,000 that were flooded. The storm also caused widespread damage to farmland, roads, and water and electricity infrastructure. Quang Binh and Ha Tinh provinces bore the brunt of the damage.
Central Vietnam is often in the path of tropical storms and depressions that form in the East Sea, which can intensify to form tropical cyclones known as typhoons (the Pacific equivalent of an Atlantic hurricane).
Typhoon Doksuri developed and tracked exactly as forecast, meaning that evacuations were relatively effective in saving lives. What’s more, the storm moved quickly over the affected area, delivering only 200-300 mm of rainfall and sparing the region the severe flooding now being experienced in Thailand.
Doksuri is just one of a spate of severe tropical cyclones that have formed in recent weeks, in both the Pacific and Atlantic regions. Hurricanes Harvey, Irma and, most recently, Maria have attracted global media coverage, much of it focused on rarely considered angles such as urban planning, poverty, poor development, politics, the media coverage of disasters – as well as the perennial question of climate change.
Disasters are finally being talked about as part of a discourse of systemic oppression - and this is a great step forward.
Vietnam’s vulnerability
In Vietnam, the root causes of disasters exist below the surface. The focus remains on the natural hazards that trigger disasters, rather than on the vulnerable conditions in which many people are forced to live.
Unfortunately, the limited national disaster data in Vietnam does not allow an extensive analysis of risk. Our research in central Vietnam is working towards filling this gap and the development of more comprehensive flood mitigation measures.
Central Vietnam has a long and exposed coastline. It consists of 14 coastal provinces and five provinces in the Central Highlands. The Truong Son mountain range rises to the west and the plains that stretch to the coast are fragmented and narrow. River systems are dense, short and steep, with rapid flows.
These physical characteristics often combine with widespread human vulnerability, to deadly effect. We can see this in the impact of Typhoon Doksuri, but also to a lesser extent in the region’s annual floods.
Flood risk map by province using Multi-Criteria Decision-Making method and the national disaster database. Author provided
Rapid population growth, industrial development and agricultural expansion have all increased flood risk, especially in Vietnam’s riverine and coastal areas. Socially marginalised people often have to live in the most flood-prone places, sometimes as a result of forced displacement.
Floods and storms therefore have a disproportionately large effect on poorer communities. Most people in central Vietnam depend on their natural environment for their livelihood, and a disaster like Doksuri can bring lasting suffering to a region where 30-50% of people are already in poverty.
When disaster does strike, marginalised groups face even more difficulty because they typically lack access to public resources such as emergency relief and insurance.
The rural poor will be particularly vulnerable after this storm. Affected households have received limited financial support from the local government, and many will depend entirely on charity for their recovery.
Better research, less bureaucracy
This is not to say that Vietnam’s government did not mount a significant effort to prepare for and respond to Typhoon Doksuri. But typically for Vietnam, where only the highest levels of government are trusted with important decisions, the response was bureaucratic and centralised.
This approach can overlook the input of qualified experts, and lead to decisions being taken without enough data about disaster risk.
Our research has generated a more detailed picture of disaster risk (focused on flood hazard) in the region. We have looked beyond historical loss statistics and collected data on hazards, exposure and vulnerability in Quang Nam province.
Left: flooding hazard map for Quang Nam province. Right: risk of flooding impacts on residents, calculated on the basis of flood hazards from the left map, plus people’s exposure and vulnerability. Author provided
Our findings show that much more accurate, sensitive and targeted flood protection is possible. The challenge is to provide it on a much wider scale, particularly in poor regions of the world.
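For readers interested in how such maps combine hazard, exposure and vulnerability, here is a minimal sketch of a weighted-sum (multi-criteria) risk index of the kind commonly used in flood risk mapping. The layers and weights below are invented for illustration; they are not the values from the Quang Nam analysis.

```python
import numpy as np

# Hypothetical normalised layers (0 = lowest, 1 = highest) on a tiny 2x2 grid.
# In a real study these come from flood modelling, census and land-use data.
hazard        = np.array([[0.2, 0.8], [0.5, 0.9]])   # e.g. modelled flood depth and frequency
exposure      = np.array([[0.1, 0.9], [0.4, 0.7]])   # e.g. population and asset density
vulnerability = np.array([[0.3, 0.6], [0.8, 0.5]])   # e.g. poverty, housing quality, access to relief

# Illustrative criterion weights for a weighted-sum overlay (they must sum to 1).
weights = {"hazard": 0.4, "exposure": 0.3, "vulnerability": 0.3}

risk = (weights["hazard"] * hazard
        + weights["exposure"] * exposure
        + weights["vulnerability"] * vulnerability)

print(np.round(risk, 2))  # higher cells flag priority areas for flood defences and relief planning
```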
Reduce risk, and avoid creating new risk
An effective risk management approach can help to reduce the impacts of flooding in central Vietnam. Before a disaster ever materialises, we can work to reduce risk – and avoid activities that exacerbate it, such as land grabbing for development, displacement of the poor, environmental degradation and discrimination against minorities.
Read more: Irma and Harvey: very different storms, but both affected by climate change.
It is critical that subject experts, particularly scientists, are involved in decisions about disaster risk - in Vietnam and around the world. There must be a shift to more proactive approaches, guided by deep knowledge both of the local context and of the latest scientific advances.
Our maps will help planners and politicians to recognise high-risk areas, prepare flood risk plans, and set priorities for both flood defences and responses to vulnerability. The maps are also valuable tools for communication.
But at the same time as emphasising data-driven decisions, we also need to advocate for a humanising approach in dealing with some of the most oppressed, marginalised, poor and disadvantaged members of the global community.
Jason von Meding receives funding from the Australian government and Save the Children for collaborative projects in Vietnam.
Chinh Luu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Keeping global warming to 1.5 degrees: really hard, but not impossible
The Paris climate agreement has two aims: “holding the increase in global average temperature to well below 2℃ above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5℃”. The more ambitious of these is not yet out of reach, according to our new research.
Despite previous suggestions that this goal may be a lost cause, our calculations suggest that staying below 1.5℃ looks scientifically feasible, if extremely challenging.
Read more: What is a pre-industrial climate and why does it matter?.
Climate targets such as the 1.5℃ and 2℃ goals have been interpreted in various ways. In practice, however, these targets are probably best seen as focal points for negotiations, providing a common basis for action.
To develop policies capable of hitting these targets, we need to know the size of the “carbon budget” – the total amount of greenhouse emissions consistent with a particular temperature target. Armed with this knowledge, governments can set policies designed to reduce emissions by the corresponding amount.
In a study published in Nature Geoscience, we and our international colleagues present a new estimate of how much carbon budget is left if we want to remain below 1.5℃ of global warming relative to pre-industrial temperatures (bearing in mind that we are already at around 0.9℃ for the present decade).
We calculate that by limiting total CO₂ emissions from the beginning of 2015 to around 880 billion tonnes of CO₂ (240 billion tonnes of carbon), we would give ourselves a two-in-three chance of holding warming to less than 0.6℃ above the present decade. This may sound like a lot, but to put it in context, if CO₂ emissions were to continue to increase along current trends, even this new budget would be exhausted in less than 20 years (see the Climate Clock). This budget is consistent with the 1.5℃ goal, given the warming that humans have already caused, and is substantially greater than the budgets previously inferred from the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), released in 2013-14.
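A rough back-of-the-envelope check illustrates the timescale. It assumes current global emissions of roughly 40 billion tonnes of CO₂ a year with modest continued growth; these round figures are for illustration only and are not taken from the paper.

```python
# Rough check on how quickly an ~880 Gt CO2 budget (counted from the start of 2015)
# would be used up. Assumes ~40 Gt CO2 of emissions per year, growing slowly;
# the paper's own accounting is considerably more detailed than this.

budget_gt = 880.0      # remaining budget, Gt CO2
emissions_gt = 40.0    # assumed current annual emissions, Gt CO2
growth_rate = 0.01     # assumed annual growth in emissions

years, used = 0, 0.0
while used + emissions_gt <= budget_gt:
    used += emissions_gt
    emissions_gt *= 1 + growth_rate
    years += 1

print(f"Budget exhausted after roughly {years} years")  # about two decades
```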
This does not mean that the IPCC got it wrong. Having predated the Paris Agreement, the IPCC report included very little analysis of the 1.5℃ target, which only became a political option during the Paris negotiations themselves. The IPCC did not develop a thorough estimate of carbon budgets consistent with 1.5℃, for the simple reason that nobody had asked them to.
The new study contains a far more comprehensive analysis of the factors that help to determine carbon budgets, such as model-data comparisons, the treatment of non-CO₂ gases, and the issue of the maximum rates at which emissions can feasibly be reduced.
Tough task
The emissions reductions required to stay within this budget remain extremely challenging. CO₂ emissions would need to decline by 4-6% per year for several decades. There are precedents for this, but not happy ones: these kinds of declines have historically been seen in events such as the Great Depression, the years following World War II, and during the collapse of the Soviet Union – and even these episodes were relatively brief.
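The link between a 4-6% annual decline and the budget above follows from a simple geometric series: if emissions fall by a constant fraction each year, total future emissions approach the current annual figure divided by that fraction. A quick sketch, again assuming roughly 40 Gt CO₂ a year as the starting point (an illustrative figure, not one from the paper):

```python
# Total future emissions under a constant fractional decline r each year:
# E0 + E0*(1-r) + E0*(1-r)^2 + ... = E0 / r  (geometric series)
E0 = 40.0   # assumed current annual emissions, Gt CO2
for r in (0.04, 0.05, 0.06):
    print(f"decline of {r:.0%}/yr -> total future emissions ~{E0 / r:.0f} Gt CO2")
# ~667-1,000 Gt, bracketing the ~880 Gt budget quoted above.
```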
Yet it would be wrong to conclude that greenhouse emissions can only plummet during times of economic collapse and human misery. Really, there is no historical analogy to show how rapidly human societies can rise to this challenge, because there is also no analogy for the matrix of problems (and opportunities) posed by climate change.
There are several optimistic signs that peak emissions may be near. From 2000 to 2013 global emissions climbed sharply, largely because of China’s rapid development. But global emissions may now have plateaued, and given the problems that China encountered with pollution it is unlikely that other nations will attempt to follow the same path. Rapid reduction in the price of solar and wind energy has also led to substantial increases in renewable energy capacity, which also offers hope for future emissions trajectories.
In fact, we do not really know how fast we can decarbonise an economy while improving human lives, because so far we haven’t tried very hard to find out. Politically, climate change is an “aggregate efforts global public good”, which basically means everyone needs to pull together to be successful.
This is hard. The problem with climate diplomacy (and the reason it took so long to broker a global agreement) is that the incentives for nations to tackle climate change are collectively strong but individually weak.
Read more: Paris climate targets aren’t enough but we can close the gap.
This is, unfortunately, the nature of the problem. But our research suggests that a 1.5℃ world, dismissed in some quarters as a pipe dream, remains physically possible.
Whether it is politically possible depends on the interplay between technology, economics, and politics. For the world to achieve its most ambitious climate aspiration, countries need to set stronger climate pledges for 2030, and then keep making deep emissions cuts for decades.
No one is saying it will be easy. But our calculations suggest that it can be done.
Dave Frame receives funding from the Deep South National Science Challenge and Victoria University of Wellington.
H. Damon Matthews receives funding from the Natural Science and Engineering Research Council of Canada.
Curious Kids: What happens if a venomous snake bites another snake of the same species?
This is an article from Curious Kids, a series for children. The Conversation is asking kids to send in questions they’d like an expert to answer. All questions are welcome – serious, weird or wacky!
If a lethally poisonous snake bites another lethally poisonous snake of the same species does the bitten snake suffer healthwise or die? – Ella, age 10, Wagga Wagga.
Hi Ella,
That’s a great question.
If a venomous snake is bitten by another venomous snake of the same species (for example, during a fight or while mating), then it will not be affected.
However, if a snake is bitten by a venomous snake of another species, it probably will be affected.
This is probably because snakes have evolved to be immune to venom from their own species, because bites from mates or rivals of the same species probably happen fairly often.
But a snake being regularly bitten by another snake from a different species? It’s unlikely that would happen very often, so snakes haven’t really had a chance to develop immunity to venom from other species.
Read more: Guam’s forests are being slowly killed off – by a snake
Scientists often collect venom from snakes to create anti-venoms. Kalyan Varma/Wikimedia
Snakes can break down venom in the stomach
Many people believe that snakes are immune to their own venom so that they don’t get harmed when eating an animal they have just injected full of venom.
But in fact, they don’t need to be immune. Scientists have found that special digestive chemicals in the stomachs of most vertebrates (animals with backbones) break down snake venom very quickly. So the snake’s stomach can quickly deal with the venom in the animal it just ate before it has a chance to harm the snake.
People who have snakes as pets often see this. If one venomous snake bites a mouse and injects venom into it, for example, you can then feed that same dead mouse to another snake. The second snake won’t die.
Read more: Curious Kids: How do snakes make an ‘sssssss’ sound with their tongue poking out?
The eastern brown snake, which is found in Australia, is one of the most venomous snakes in the world. Flickr/Justin Otto, CC BY
The difference between venom and poison
By the way, scientists usually use the word “venomous” rather than “poisonous” when they’re talking about snakes. Many people often mix those words up. Poisons need to be ingested or swallowed to be dangerous, while venoms need to be injected via a bite or a sting.
Some snakes can inject their toxins into their prey, which makes them venomous. However, there seem to be a couple of snake species that eat frogs and can store the toxins from the frogs in their body. This makes them poisonous if the snake’s body is eaten. Over time, many other animals have learned that it is not safe to eat those snakes, so this trick helps keep the snakes safe.
Hello, curious kids! Have you got a question you’d like an expert to answer? Ask an adult to send your question to us. You can:
* Email your question to curiouskids@theconversation.edu.au
* Tell us on Twitter by tagging @ConversationEDU with the hashtag #curiouskids, or
* Tell us on Facebook
Please tell us your name, age and which city you live in. You can send an audio recording of your question too, if you want. Send as many questions as you like! We won’t be able to answer every question but we will do our best.
Jamie Seymour does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Bacterial baggage: how humans are spreading germs all over the globe
Humans are transporting trillions of bacteria around the world via tourism, food and shipping, without stopping to think about the potential damage being caused to bacterial ecosystems.
When we think about endangered species, we typically think of charismatic mammals such as whales, tigers or pandas. But the roughly 5,500 mammal species on Earth are a relatively paltry group – one that pales in comparison with bacteria, of which there are at least a million different species.
Despite their vast numbers, little research has been done to understand the impact that modern human practices have on these tiny organisms, which have an important influence on many facets of our lives.
Read more: Microbes: the tiny sentinels that can help us diagnose sick oceans.
In an article published today in Science, my colleagues and I explore how humans move bacteria around the globe, and what this might mean for our own welfare.
Human effects on our planet are so profound that we have entered a new geological age, called the Anthropocene. One of the key features of this new world is the way we affect other organisms. We have altered the distribution of animals and plants, creating problems with feral animals, weeds and other invasive species. We have caused many species to decline so much that they have gone extinct.
There are also grounds for concern over the way humans are affecting bacterial species, and in many cases we are causing the same type of problems that affect larger organisms.
Bacterial population structures are definitely changing, and bacterial species are being transported to new locations, becoming the microbial equivalent of weeds or feral animals. Perhaps some bacteria are even on their way to extinction, although we don’t really have enough information to be certain yet.
How do they get around?
Let’s start by talking about sewage and manure. Animal and human faeces release gut microorganisms back into the environment, and these organisms are vastly different from the organisms that would have been released 100 years ago. This is because humans and our domesticated animals – cows, sheep, goats, pigs and chickens – now comprise 35 times more biomass than all the wild mammals on land.
Human sewage and livestock manure contain very specific subsets of microbes, meaning those populations are enriched and replenished in the environment, at the expense of the native microbes. Sewage and manure also distribute enormous quantities of genes that confer resistance to antibiotics and disinfectants.
Waste water, sewage sludge and manure are used extensively in agriculture. So gut organisms from humans and agricultural animals go on to contaminate foodstuffs. These food products, along with their bacteria, are then shipped around the world.
Then there are the 1.2 billion international tourist movements per year, which also unintentionally transport gut microorganisms to exotic locations. For instance, tourism can rapidly spread antibiotic resistant pathogens between continents.
It’s not just humans and their food that cause concern – there are also vast quantities of microbe-laden materials that move along with us. Each year, roughly 100 million tonnes of ballast water are discharged from ships in US ports alone. This movement of microorganisms via shipping is changing the distribution of bacteria in the oceans. It also transports pathogens such as the cholera bacterium.
Humans also move vast quantities of sand, soil and rock. It may seem hard to believe, but it is estimated that human activities are now responsible for moving more soil than all natural processes combined. As every gram of soil contains roughly a billion bacteria, this amounts to huge numbers of microorganisms being moved around the planet.
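A quick order-of-magnitude calculation, using only the billion-bacteria-per-gram figure quoted above (the truckload is a hypothetical example), shows just how many microbes ride along with moved earth:

```python
# How many bacteria travel with each tonne of soil we move, using the
# ~1 billion bacteria per gram figure quoted in the article.
bacteria_per_gram = 1e9
grams_per_tonne = 1e6

bacteria_per_tonne = bacteria_per_gram * grams_per_tonne
print(f"{bacteria_per_tonne:.0e} bacteria per tonne of soil")        # ~1e15

# Purely illustrative: a single ten-tonne truckload relocates on the order of 1e16 bacteria.
print(f"{10 * bacteria_per_tonne:.0e} bacteria per ten-tonne truckload")
```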
The fallout
Why should we care if bacteria are being spread to new places? Besides the obvious potential for spreading diseases to humans, animals and crops, there are also hidden dangers.
Microorganisms are invisible to the naked eye, so we tend to ignore them and don’t necessarily appreciate their role in how the planet operates. Bacteria are crucial to biogeochemistry – the cycling of nutrients and other chemicals through ecosystems.
Read more: Your microbiome is shared with your family… including your pets.
For instance, before humans invented a way to make fertiliser industrially, every single nitrogen atom in our proteins and DNA had to be chemically captured by a bacterial cell before it could be taken up by plants and then enter the human food chain. The oxygen we breathe is largely made by photosynthetic microorganisms in the oceans (and not mainly by rainforests, as is commonly believed).
Our effects on bacteria have the potential to alter these fundamental bacterial functions. It is vital to gain a better understanding of how humans are affecting microbes’ distribution, their abundance, and their life-sustaining processes. Although bacteria are invisible, we overlook them at our peril.
Michael Gillings receives funding from the Australian Research Council
After 30 years of the Montreal Protocol, the ozone layer is gradually healing
This weekend marks the 30th birthday of the Montreal Protocol, often dubbed the world’s most successful environmental agreement. The treaty, signed on September 16, 1987, is slowly but surely reversing the damage caused to the ozone layer by industrial gases such as chlorofluorocarbons (CFCs).
Each year, during the southern spring, a hole appears in the ozone layer above Antarctica. This is due to the extremely cold temperatures in the winter stratosphere (above 10km altitude) that allow byproducts of CFCs and related gases to be converted into forms that destroy ozone when the sunlight returns in spring.
As ozone-destroying gases are phased out, the annual ozone hole is generally getting smaller – a rare success story for international environmentalism.
Back in 2012, our Saving the Ozone series marked the Montreal Protocol’s silver jubilee and reflected on its success. But how has the ozone hole fared in the five years since?
Read more: What is the Antarctic ozone hole and how is it made?.
The Antarctic ozone hole has continued to appear each spring, as it has since the late 1970s. This is expected, as levels of the ozone-destroying halocarbon gases controlled by the Montreal Protocol are still relatively high. The figure below shows that concentrations of these human-made substances over Antarctica have fallen by 14% since their peak in about 2000.
Past and predicted levels of controlled gases in the Antarctic atmosphere, quoted as equivalent effective stratospheric chlorine (EESC) levels, a measure of their contribution to stratospheric ozone depletion. Paul Krummel/CSIRO, Author provided
It typically takes a few decades for these gases to cycle between the lower atmosphere and the stratosphere, and then ultimately to disappear. The most recent official assessment, released in 2014, predicted that it will take 30-40 years for the Antarctic ozone hole to shrink to the size it was in 1980.
Signs of recovery
Monitoring the ozone hole’s gradual recovery is made more complicated by variations in atmospheric temperatures and winds, and the amount of microscopic particles called aerosols in the stratosphere. In any given year these can make the ozone hole bigger or smaller than we might expect purely on the basis of halocarbon concentrations.
Launching an ozone-measuring balloon from Australia’s Davis Research Station in Antarctica. Barry Becker/BOM/AAD, Author provided
The 2014 assessment indicated that the size of the ozone hole varied more during the 2000s than during the 1990s. While this might suggest it has become harder to detect the healing effects of the Montreal Protocol, we can nevertheless tease out recent ozone trends with the help of sophisticated atmospheric chemistry models.
Reassuringly, a recent study showed that the size of the ozone hole each September has shrunk overall since the turn of the century, and that more than half of this shrinking trend is consistent with reductions in ozone-depleting substances. However, another study warns that careful analysis is needed to account for a variety of natural factors that could confound our detection of ozone recovery.
The 2015 volcano
One such factor is the presence of ozone-destroying volcanic dust in the stratosphere. Chile’s Calbuco volcano seems to have played a role in enhancing the size of the ozone hole in 2015.
At its maximum size, the 2015 hole was the fourth-largest ever observed. It was in the top 15% in terms of the total amount of ozone destroyed. Only 2006, 1998, 2001 and 1999 had more ozone destruction, whereas other recent years (2013, 2014 and 2016) ranked near the middle of the observed range.
Average ozone concentrations over the southern hemisphere during October 1-15, 2015, when the Antarctic ozone hole for that year was near its maximum extent. The red line shows the boundary of the ozone hole. Paul Krummel/CSIRO/EOS, Author provided
Another notable feature of the 2015 ozone hole was that it was at its biggest observed extent for much of the period from mid-October to mid-December. This coincided with a period during which the jet of westerly winds in the Antarctic stratosphere was particularly unaffected by the warmer, more ozone-rich air at lower latitudes. In a typical year, the influx of air from lower latitudes helps to limit the size of the ozone hole in spring and early summer.
The 2017 hole
As noted above, the ozone holes of 2013, 2014 and 2016 were relatively unremarkable compared with that of 2015, being close to the long-term average for overall ozone loss.
In general respects, these ozone holes were similar to those seen in the late 1980s and early 1990s, before the peak of ozone depletion. This is consistent with a gradual recovery of the ozone layer as levels of ozone-depleting substances gradually decline.
This year’s hole began to form in early August, with timing similar to the long-term average. Stratospheric temperatures during the Antarctic winter were slightly cooler than in 2016, which would favour the chemical changes that lead to ozone destruction in spring. However, temperatures climbed above average in mid-August during a disturbance to the polar winds, delaying the hole’s expansion. As of the second week of September, the warmer-than-average temperatures have continued, but the ozone hole has nevertheless grown slightly larger than the long-term average since 1979.
Read more: Saving the ozone layer: why the Montreal Protocol worked.
While annual monitoring continues, which includes measurements under the Australian Antarctic Program, a more comprehensive assessment of the ozone layer’s prospects is set to arrive late next year. Scientists across the globe, coordinated by the UN Environment Program and the World Meteorological Organisation, are busy preparing the next report required under the Montreal Protocol, called the Scientific Assessment of Ozone Depletion: 2018.
This peer-reviewed report will examine the recent state of the ozone layer and the atmospheric concentration of ozone-depleting chemicals, how the ozone layer is projected to change, and links between ozone change and climate.
In the meantime we’ll watch the 2017 hole as it peaks and then shrinks over the remainder of the year, as well as the ozone holes of future years, which should generally become smaller as the ozone layer heals.
Andrew Klekociuk is employed by the Australian Antarctic Division and is funded by the Department of the Environment and Energy of the Australian government.
Paul Krummel is employed by CSIRO and receives funding from MIT, NASA, Australian Bureau of Meteorology, Department of the Environment and Energy, and Refrigerant Reclaim Australia.
Predicting disaster: better hurricane forecasts buy vital time for residents
Hurricane Irma (now downgraded to a tropical storm) caused widespread devastation as it passed along the northern edge of the Caribbean island chain and then moved northwards through Florida. The storm’s long near-coastal track exposed a large number of people to its force.
At its peak, Hurricane Irma was one of the most intense ever observed in the North Atlantic. It stayed close to that peak for an unusually long period, maintaining almost 300km per hour winds for 37 hours.
Both of these factors were predicted a few days in advance by the forecasters of the US National Hurricane Center. These forecasts relied heavily on modern technology - a combination of computer models with satellite, aircraft and radar data.
Read more: Irma and Harvey: very different storms, but both affected by climate change
Forecasting is getting better
Although Irma was a very large and intense storm, and many communities were exposed to its force, our capacity to manage and deal with these extreme weather events has saved many lives.
There are many reasons for this, including significant construction improvements. But another important factor is much more accurate forecasts, with a longer lead time. When Tropical Cyclone Tracy devastated Darwin in 1974, the Bureau of Meteorology could only provide 12-hour forecasts of the storm’s track, giving residents little time to prepare.
These days, weather services provide three to five days’ advance warning of landfall, greatly improving our ability to prepare. What’s more, today’s longer-range forecasts are more accurate than the short-range forecasts of a few decades ago.
We have also become better at communicating the threat and the necessary actions, ensuring that an appropriate response is made.
The improvement in forecasting tropical cyclones (known as hurricanes in the North Atlantic region, and typhoons in the northwest Pacific) hasn’t just happened by good fortune. It represents the outcome of sustained investment over many years by many nations in weather satellites, faster computers, and the science needed to get the best out of these tools.
Tropical cyclone movement and intensity is affected by the surrounding weather systems, as well as by the ocean surface temperature. For instance, when winds vary significantly with height (called wind shear), the top of the storm attempts to move in a different direction from the bottom, and the storm can begin to tilt. This tilt makes the storm less symmetrical and usually weakens it. Irma experienced such conditions as it moved northwards from Cuba and onto Florida. But earlier, as it passed through the Caribbean, a low-shear environment and warm sea surface contributed to the high, sustained intensity.
In Irma’s case, forecasters used satellite, radar and aircraft reconnaissance data to monitor its position, intensity and size. The future track and intensity forecast relies heavily on computer model predictions from weather services around the world. But the forecasters don’t just use this computer data blindly – it is checked against, and synthesised with, the other data sources.
In Australia, government and industry investment in supercomputing and research is enabling the development of new tropical cyclone forecast systems that are more accurate. They provide earlier warning of tropical cyclone track and intensity, and even advance warning of their formation.
Still hard to predict destruction
Better forecasting helps us prepare for the different hazards presented by tropical cyclones.
The deadliest aspects of tropical cyclones are storm surges (when the sea rises and flows inland under the force of the wind and waves) and flooding from extreme rainfall, both of which pose a risk of drowning. Worldwide, all of the deadliest tropical cyclones on record featured several metres’ depth of storm surge, widespread freshwater flooding, or both.
Wind can severely damage buildings, but experience shows that even if the roof is torn off, well-constructed buildings still provide enough shelter for their occupants to have an excellent chance of surviving without major injury.
By and large, it is the water that kills. A good rule of thumb is to shelter from the wind, but flee from the water.
Windy.com combines weather data from the Global Forecast System, North American Mesoscale and the European Centre for Medium-Range Weather Forecasts to create a live global weather map.
This means that predicting the damage and loss caused by a tropical cyclone is hard, because it depends on both the severity of the storm and the vulnerability of the area it hits.
Hurricane Katrina in 2005 provides a good illustration. Katrina was a Category 3 storm when it made landfall over New Orleans, about as intense at landfall as Australian tropical cyclones Vance, Larry and Yasi. Yet Katrina caused at least 1,200 deaths and more than US$100 billion in damage, making it the third-deadliest and by far the most expensive storm in US history. One reason was Katrina’s relatively large area, which produced a very large storm surge. But the other factor was the extraordinary vulnerability of New Orleans, with much of the city below normal sea level and protected by levees that were breached or destroyed by the storm surge, leading to extensive deep flooding.
We have already seen with Hurricane Irma that higher sea levels have exacerbated the storm surge. Whatever happens along the remainder of Irma’s path, it will be remembered as a spectacularly intense storm, and for its very significant impacts in the Caribbean and Florida. One can only imagine how much worse those impacts would have been had the populations not been forewarned.
But increased population and infrastructure in coastal areas, together with the effects of climate change, mean that we in the weather forecasting business must continue to improve. Forewarned is forearmed.
Andrew Dowdy is working on a project funded through the National Environmental Science Programme (NESP) Earth Systems and Climate Change Hub.
Jeffrey David Kepert does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
More coal doesn’t equal more peak power
The proposed closure date for Liddell, AGL’s ancient and unreliable coal power station, is five years and probably two elections away. While AGL has asked for 90 days to come up with a plan to deliver equivalent power into the market, state and local governments, businesses and households will continue to drive the energy revolution.
At the same time as AGL is insisting they won’t sell Liddell or extend its working life, government debate has returned to the Clean Energy Target proposed by the Finkel Review. Now Prime Minister Malcolm Turnbull is suggesting a redesign of the proposal, potentially paving the way for subsidies to low-emission, high-efficiency coal power stations.
But even if subsidies for coal are built into a new “reliable energy target”, there’s no sign that the market has any appetite for building new coal. For a potential investor in a coal-fired generator, the eight years before it could produce a cash flow is a long time in a rapidly changing world. And the 30 years needed to turn a profit is a very long time indeed.
Read more: The true cost of keeping the Liddell power plant open
We also need to remember that baseload coal power stations are not much help in coping with peak demand – the issue that will determine whether people in elevators are trapped by a sudden blackout, per Barnaby Joyce. It was interesting that a Melbourne Energy Institute study of global pumped hydro storage mentioned that electricity grids with a lot of nuclear or coal baseload generation have used pumped storage capacity for decades: it’s needed to supply peak demand.
Solar power is driving down daytime prices – which used to provide much of the income that coal plants needed to make a profit. Energy storage will further reduce the scope to profit from high and volatile electricity prices, previously driven by high demand and supply shortages in hot weather, or when a large coal-fired generator failed or was shut down for maintenance at a crucial time.
Read more: Slash Australians’ power bills by beheading a duck at night
There is now plenty of evidence that the diverse mix of energy efficiency, demand response, energy storage, renewable generation and smart management can ensure reliable and affordable electricity to cope with daily and seasonal variable electricity loads. New traditional baseload generators will not be financially viable, as they simply won’t capture the profits they need during the daytime.
The government is now focused on AGL and how it will deliver 1,000 megawatts of new dispatchable supply. In practice, appropriate policy action would facilitate the provision of plenty of supply, storage, demand response and energy efficiency to ensure reliable supply. But the government is unable to deliver policy because of its internal squabbles, and AGL looks like a convenient scapegoat.
Demand response is already working
It is astounding that conservatives can continue to blame renewable energy for increasing prices. They are either ignorant or have outdated agendas to prop up coal. A smart, efficient, renewable electricity future will be cheaper than any other – albeit not necessarily cheaper than our past electricity prices.
Along with other studies, CSIRO’s recent Low Emission Technology Roadmap showed that the “ambitious energy productivity (and renewable energy)” scenario was quite reasonably priced.
While the debate continues to focus on large-scale supply, “behind the meter” action is accelerating through demand response, energy efficiency and on-site renewables. As I mentioned in a previous column, the ARENA/AEMO demand response pilot has attracted almost 700MW of flexible demand reduction to be delivered before Christmas, and another 1,000MW by December 2018. That’s nearly as much as Liddell could supply flat out. And there’s plenty more where that came from.
Spending a few hundred million dollars to prop up an old coal plant for a few years would shift it to the high-cost end of coal generators. So when prices fall, it would be one of the first coal plants to have to shut down, and among the last to come back online when prices rebound. This would add to the stress on the facility and the management challenges of operating it – unless it had preferential cheap access to a lot of pumped hydro capacity.
In the medium to long term, we do need to work out how to supply electricity for 24/7 industries but, according to AEMO, this is not urgent. We don’t know how much of that kind of industry will still be here in ten years or so, given high gas prices, the age of their industrial plants, and their small scale compared with their international competitors.
On the other hand, they may adapt by investing in behind-the-meter measures. Or they could relocate to sunny places and be part of what the economist Ross Garnaut has called the “low-carbon energy superpower”.
Disclosure
Alan Pears has worked for government, business, industry associations, public interest groups and universities on energy efficiency, climate response and sustainability issues since the late 1970s. He is now an honorary Senior Industry Fellow at RMIT University and a consultant, as well as an adviser to a range of industry associations and public interest groups. His investments in managed funds include firms that benefit from growth in clean energy. He has shares in Hepburn Wind.
Explainer: how does the sea 'disappear' when a hurricane passes by?
You may have seen the media images of bays and coastlines along Hurricane Irma’s track, in which the ocean has eerily “disappeared”, leaving locals amazed and wildlife stranded. What exactly was happening?
These coastlines were experiencing a “negative storm surge” – one in which the storm pushes water away from the land, rather than towards it.
Read more: Irma and Harvey: very different storms, but both affected by climate change
Most people are familiar with the idea that the sea is not at the same level everywhere at the same time. It is an uneven surface, pulled around by gravitational forces such as the tidal effects of the Moon and Sun. This is why we see tides rise and fall at any given location.
At the same time, Earth’s atmosphere has regions where the air pressure is higher or lower than average, in ever-shifting patterns as weather systems move around. Areas of high atmospheric pressure actually push down on the ocean surface, lowering sea level, while low pressure allows the sea to rise slightly.
This is known as the “inverse barometer effect”. Roughly speaking, a 1 hectopascal change in atmospheric pressure (the global average pressure is 1,010hPa) causes the sea level to move by 1cm.
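A worked example of that rule of thumb (the 930 hPa central pressure below is purely illustrative, not a figure from any particular storm):

```python
# Inverse barometer effect, using the rule of thumb quoted above:
# roughly 1 cm of sea level change per 1 hPa of pressure difference from the mean.
def inverse_barometer_rise_cm(central_pressure_hpa, mean_pressure_hpa=1010.0):
    return mean_pressure_hpa - central_pressure_hpa  # each hPa of deficit ~ 1 cm of rise

# Hypothetical deep hurricane with a 930 hPa centre:
print(f"{inverse_barometer_rise_cm(930.0):.0f} cm of pressure-driven sea level rise")  # ~80 cm
```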
When a low-pressure system forms over warm tropical oceans under the right conditions, it can intensify to become a tropical depression, then a tropical storm, and ultimately a tropical cyclone – known as a hurricane in the North Atlantic or a typhoon in the northwest Pacific.
As this process unfolds, the atmospheric pressure drops ever lower and wind strength increases, because the pressure difference with surrounding areas causes more air to flow towards the storm.
In the northern hemisphere tropical cyclones rotate anticlockwise and officially become hurricanes once they reach a maximum sustained wind speed of around 120km per hour. If sustained wind speeds reach 178km per hour the storm is classed as a major hurricane.
Surging waters
A “normal” storm surge happens when a tropical cyclone reaches shallow coastal waters. In places where the wind is blowing onshore, water is pushed up against the land. At the same time the cyclone’s incredibly low air pressure allows the water to rise higher than normal. On top of all this, the high waves whipped up by the wind mean that even more water inundates the coast.
The anticlockwise rotation of Atlantic hurricanes means that the storm’s northern side produces winds blowing from the east, and its southern side brings westerly winds. In the case of Hurricane Irma, which tracked almost directly up the Florida peninsula, this meant that as it approached, the east coast of the peninsula experienced easterly onshore winds and suffered a storm surge that caused severe inundation and flooding in areas such as Miami.
The negative surge
In contrast, these same easterly winds had the opposite effect on Florida’s west coast (the Gulf Coast), where water was pushed offshore, leading to a negative storm surge. This was most pronounced in areas such as Fort Myers and Tampa Bay, which normally have a relatively small tidal range of less than 1m.
The negative surge developed over a period of about 12 hours and resulted in a water level up to 1.5m below the predicted low tide level. Combined with the fact that the sea is shallow in these areas anyway, it looked as if the sea had simply disappeared.
Read more: Predicting disaster: better hurricane forecasts buy vital time for residents.
As tropical cyclones rapidly lose energy when moving over land, the unusually low water level was expected to rise again quickly, which prompted authorities to issue a flash flood warning to alert onlookers to the potential danger. The negative surge was replaced by a storm surge of a similar magnitude within about six hours at Fort Myers and 12 hours later at Tampa Bay.
Rising waters are the deadliest aspect of hurricanes – even more than the ferocious winds. So while it may be tempting to explore the uncovered seabed, it’s certainly not wise to be there when the sea comes rushing back.
Darrell Strauss receives funding from an Advance Queensland Research Fellowship in partnership with Griffith University and the City of Gold Coast.
How Antarctic ice melt can be a tipping point for the whole planet's climate
Melting of Antarctica’s ice can trigger rapid warming on the other side of the planet, according to our new research, which details how just such an abrupt climate event happened about 30,000 years ago, when the North Atlantic region warmed dramatically.
This idea of “tipping points” in Earth’s system has had something of a bad rap ever since the 2004 blockbuster The Day After Tomorrow purportedly showed how melting polar ice can trigger all manner of global changes.
But while the movie certainly exaggerated the speed and severity of abrupt climate change, we do know that many natural systems are vulnerable to being pushed into different modes of operation. The melting of Greenland’s ice sheet, the retreat of Arctic summer sea ice, and the collapse of the global ocean circulation are all examples of potential vulnerability in a future, warmer world.
Read more: Chasing ice: how ice cores shape our understanding of ancient climate.
Of course it is notoriously hard to predict when and where elements of Earth’s system will abruptly tip into a different state. A key limitation is that historical climate records are often too short to test the skill of our computer models used to predict future environmental change, hampering our ability to plan for potential abrupt changes.
Fortunately, however, nature preserves a wealth of evidence in the landscape that allows us to understand how longer time-scale shifts can happen.
Core values
Among the most important sources of information on past climate tipping points are the kilometre-long cores of ice drilled from the Greenland and Antarctic ice sheets, which preserve exquisitely detailed information stretching back up to 800,000 years.
The Greenland ice cores record massive, millennial-scale swings in temperature that have occurred across the North Atlantic region over the past 90,000 years. The scale of these swings is staggering: in some cases temperatures rose by 16℃ in just a few decades or even years.
Twenty-five of these major so-called Dansgaard–Oeschger (D-O) warming events have been identified. These abrupt swings in temperature happened too quickly to have been caused by Earth’s slowly changing orbit around the Sun. Fascinatingly, when ice cores from Antarctica are compared with those from Greenland, we see a “seesaw” relationship: when it warms in the north, the south cools, and vice versa.
Attempts to explain the cause of this bipolar seesaw have traditionally focused on the North Atlantic region, and include melting ice sheets, changes in ocean circulation or wind patterns.
But as our new research shows, these might not be the only cause of D-O events.
Our new paper, published today in Nature Communications, suggests that another mechanism, with its origins in Antarctica, has also contributed to these rapid seesaws in global temperature.
Tree of knowledge
The 30,000-year-old key to climate secrets. Chris Turney, Author provided
We know that there have been major collapses of the Antarctic ice sheet in the past, raising the possibility that these may have tipped one or more parts of the Earth system into a different state. To investigate this idea, we analysed an ancient New Zealand kauri tree that was extracted from a peat swamp near Dargaville, Northland, and which lived between 29,000 and 31,000 years ago.
Through accurate dating, we know that this tree lived through a short D-O event, during which (as explained above) temperatures in the Northern Hemisphere would have risen. Importantly, the unique pattern of atmospheric radioactive carbon (or carbon-14) found in the tree rings allowed us to identify similar changes preserved in climate records from ocean and ice cores (the latter using beryllium-10, an isotope formed by similar processes to carbon-14). This tree thus allows us to compare directly what the climate was doing during a D-O event beyond the polar regions, providing a global picture.
The extraordinary thing we discovered is that the warm D-O event coincided with a 400-year period of surface cooling in the south and a major retreat of Antarctic ice.
When we searched through other climate records for more information about what was happening at the time, we found no evidence of a change in ocean circulation. Instead we found a collapse in the rain-bearing Pacific trade winds over tropical northeast Australia that was coincident with the 400-year southern cooling.
Read more: Two centuries of continuous volcanic eruption may have triggered the end of the ice age.
To explore how melting Antarctic ice might cause such dramatic change in the global climate, we used a climate model to simulate the release of large volumes of freshwater into the Southern Ocean. The model simulations all showed the same response, in agreement with our climate reconstructions: regardless of the amount of freshwater released into the Southern Ocean, the surface waters of the tropical Pacific nevertheless warmed, causing changes to wind patterns that in turn triggered the North Atlantic to warm too.
Future work is now focusing on what caused the Antarctic ice sheets to retreat so dramatically. Regardless of how it happened, it looks like melting ice in the south can drive abrupt global change, something of which we should be aware in a future warmer world.
Chris Turney receives funding from the Australian Research Council.
Jonathan Palmer receives funding from the Australian Research Council (ARC).
Peter Kershaw has received funding from the Australian Research Council.
Steven Phipps receives funding from the Australian Antarctic Science Program, the Australian Research Council, the International Union for Quaternary Research, the National Computational Infrastructure Merit Allocation Scheme, the New Zealand Marsden Fund, the University of Tasmania and UNSW Australia.
Zoe Thomas receives funding from the Australian Research Council.
Politics podcast: Mark Butler on energy uncertainty
Pressure is mounting on the government to put an end to energy uncertainty as an Australian Energy Market Operator (AEMO) report warns of looming power shortages over the next few years.
Opposition climate change and energy spokesman Mark Butler has written about the toxic divisions on energy policy in his recent book, Climate Wars. He recognises there are challenges in the Coalition partyroom over the Finkel report, but says Labor will negotiate with the government on an energy framework. It wants to avoid an ALP government inheriting the policy chaos.
Responding to the government’s push to extend the life of the Liddell power station, he says Malcolm Turnbull has unfairly concluded there is only one option.
“With a proper investment framework in place, new investment that will last decades, not just a few more years … could take place. At the moment we have an investment strike and if we can’t end the investment strike then yes in five years time in NSW we will be in a position of supply shortage.”
On the future of coal, Butler says it’s still “a massive part of our system”, and while usage will go down over time, it will be a part of the system for “as far as we politically can see”.
“The problem is not old coal power plants closing, it’s that nothing is being put in to replace them.”
On alternative sources like battery power he is optimistic about their potential, while sceptical of expanding hydro power until the results of a feasibility study are produced.
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.