Ally or adversary?

© Lidiia Moor @istock.com
Christiane Weihe
Artificial intelligence is transforming our world – and has been doing so for years. We use it to translate texts and are increasingly accepting AI-based systems’ recommendations of films or music that we might like. This development has its upsides, for it promises to deliver more efficiency and introduces entirely new functions and improvements in key areas of life – in medical diagnostics, for example. But it also poses various risks: discrimination against certain demographic groups, job losses, disinformation, or even risks to our survival from the deployment of AI in weapons systems are just a few examples that come to mind. The profound changes within society resulting from AI also have implications for the environment and climate. AI could help to protect them, but right now, all too often, it does them more harm than good.
AI-based systems are taking over a multitude of functions and are increasingly operating autonomously – driving trains in metro rail networks, for example. Many systems are designed to continuously adapt to a changing environment; in other words, they are continually learning. An AI-based system that suggests products for consumers aligns itself with its users’ preferences, thus ensuring that its suggestions are always improving. “A key basis for AI systems’ ‘behaviour’ is provided by the data on which they are trained and optimised. If these data are biased, incomplete or inaccurate in any way, this is reflected in the systems’ output,” says Dr Peter Gailhofer, a lawyer at the Oeko-Institut. “But that’s just one reason why we cannot simply allow AI and its providers to do whatever they want – and why clear rules are required.”
A first regulation
In May 2024, the EU adopted the world's first law aimed at uniform regulation of AI. “The legislation is primarily intended to protect important basic rights, prevent misuse of AI and mitigate safety risks. It follows a risk-based approach: its provisions mainly concern systems and models that are associated with a particular risk – for example, if they are deployed in critical infrastructure like power grids or are used in decision-making that has a major impact on people’s lives, such as access to public services,” Peter Gailhofer explains. “Indeed, the use of AI systems is prohibited if they threaten personal autonomy. This includes applications that manipulate people or classify them on the basis of personal characteristics, for example.”
Scientists at the Oeko-Institut continuously monitored the development of the AI Regulation, including as part of a joint project with researchers at the Society for Institutional Analysis (sofia), the Independent Institute for Environmental Issues (UfU), Jade University of Applied Sciences and the German Research Center for Artificial Intelligence. Among other things, they analysed the European Parliament’s proposal in their Policy Brief “The European Parliament’s Amendments to the AI Act”, with a particular focus on environmental and climate issues. “One of the strengths of the parliamentary draft was that unlike the Commission’s proposal, it also looked at environmental risks and thus broadened the focus beyond the specific human interests that dominated many of the debates,” says Peter Gailhofer. After all, artificial intelligence has a colossal impact on the environment and climate. “For example, there is now more discussion of the resource demand for hardware and the environmental problems that may be associated with its disposal, or of water usage in data centres’ cooling systems.” By one estimate, ChatGPT uses around half a litre of water for a conversation of up to 50 prompts and answers. The high and ever-increasing energy demand of data centres is also a major problem and could even put the energy transition at risk (for more detailed insights on this topic, see “Infinite growth” on p. 10).
The use of AI-based systems also poses indirect risks to the environment and climate, although these receive even less attention in the debate. “Ultimately, the risks depend on which goals the systems are geared towards. This can be illustrated by examples from logistics: if the intention is to make supply chains as cost-effective as possible, their carbon emissions can increase dramatically unless there are clear environmental rules in place.” Similar risks arise in agriculture. “If the aim here is to achieve a high yield, AI is likely to opt for excessive use of fertilisers rather than conserving soil and water resources.” Indirect effects can also arise in relation to individual consumption if AI persuades users to engage in more – or more harmful – consumption.
“The European Parliament’s draft included some sensible approaches to protect the environment and the climate that were missing from the Commission’s proposal,” says Dr Peter Gailhofer. “One example is the requirement to assess and mitigate foreseeable environmental risks. Sadly, most of these proposals were then deleted from the Regulation in the trilogue. The version that was adopted contains barely any binding provisions of relevance to the environment.” It does include a requirement for AI providers to state the known or estimated energy and water consumption of so-called large language models like ChatGPT. “But there is still no uniform methodology or limit values here. It is also unclear what the consequences would be if this consumption is too high. In sum, a major opportunity for broad-based embedding of environmental and climate aspects has been missed.”
Even so, from the Oeko-Institut’s perspective, the Regulation makes an important contribution to protecting human rights and safety interests. What’s also encouraging, in the experts’ view, is that the AI Regulation is seen as a flexible set of rules that is intended to respond dynamically to practical lessons learned. “Mechanisms like these should be used to close the gaps in environmental sustainability as swiftly as possible.”
Critical knowledge
As Dr Peter Gailhofer also emphasises, there are still major gaps in our knowledge. “We still have much to learn about the complex interactions between AI and society and about what its use will mean from an environmental and climate perspective. This is critical knowledge when it comes to regulation.” These knowledge gaps can be closed, he says. “But for that, far more transparency is required around issues such as training data. Open access and open source rules and comprehensive access to research data would also make sense. Researchers would then be able to identify problems and risks and generate information about policy options.”
Risky dependencies
In light of the above, it is already clear that the AI Regulation does not go far enough in responding to all the risks associated with this technology and leveraging its environmental potential, says Dr Peter Gailhofer. “The most realistic solution, from my perspective, lies in sector-specific rules which are able to respond much more effectively to the challenges in the individual fields of application and which would help to bring together the environment agencies’ specialist knowledge and AI-specific expertise.”
In a project titled “Regulatory Concept for Algorithmic Decision-making Systems under Environmental Law” on behalf of the German Environment Agency (UBA), the researchers – together with UfU and sofia – have looked at this form of sector-specific regulation. “Wherever AI is used, it can worsen environmental problems, but it can also contribute to their solution. It’s essential to address this issue. This can be done with an overarching regulation, but it may work better with one that focuses on specific issues.” The Oeko-Institut has therefore developed a form of regulatory toolbox that is intended to help decision-makers from various environmental policy fields regulate AI applications effectively in future. Sector-specific rules with this type of focus can feasibly exist alongside the AI Regulation, in the team’s view.
The project team has examined a multitude of questions, some relating to matters of principle. For example, the team began by identifying the subject-matter for regulation and developing strategies for evaluating risks of relevance to environmental law, as well as instruments to regulate AI applications. “The aim is to use the law to help identify and mitigate environmental risks. From our perspective, incidentally, a failure to leverage the potential of these technologies in mitigating environmental impacts should also be classed as a risk.” Due to the high complexity of risk assessment and the fact that AI is continually evolving, an institutional framework is required that enables the dynamic expansion of the knowledge base in order to improve decision-making. “Environmental law, but also some individual instruments provided for in the AI Regulation, offer various models via which our practical knowledge of risks and potentialities can be broadened and appropriate responses developed,” says Peter Gailhofer. With that aim in mind, the project proposes numerous instruments that may help to avoid negative environmental impacts and improve the prospects for environmentally meaningful applications to be developed and achieve commercial success.
Take a closer look!
Dr Peter Gailhofer laments the fact that policy-makers are not looking more closely at AI – especially in the light of previous experience. “We still haven’t fully worked out how to get to grips with the major impacts of Web 2.0 and social media on democracy and society. And now we have another, perhaps even bigger upheaval on our doorstep.” Timely regulation is also important, he says, because the possible consequences for our society are so profound and far-reaching. “Artificial intelligence is permeating all sectors of society and thus creating dependencies. That’s not something that can easily be reversed later on.”
---
Dr Peter Gailhofer is a lawyer in the Oeko-Institut’s Environmental Law and Governance Division, where he analyses regulatory strategies in the social-ecological transformation and how they can be developed further. He is also the Research Coordinator for Digital Ethics and Governance. The role of law in digital value chains and product life cycles is a key focus of his work.
Contacts at the Oeko-Institut
Further information
Press release: Ecological alignment of Artificial Intelligence
Press release: The environmental impacts of AI need to become more visible
Further information (in German only)
Luiza’s Newsletter: AI and Environmental Impact
Article on the Hugging Face website: The Environmental Impacts of AI