In Focus

“AI algorithms are unsuitable for safety-critical applications”

Interview with Dr. Heidy Khlaaf (The AI Now Institute)


Christiane Weihe

Whether in autonomous vehicles or nuclear power plants, artificial intelligence already plays a role in many technical applications or is expected to do so in the future. However, we are not adequately considering the safety of its use, says Dr Heidy Khlaaf. The Chief AI Scientist at The AI Now Institute is an expert in safety-critical applications. In an interview with eco@work, she explains why nuclear energy is a problematic answer to AI's growing hunger for energy and which safety gaps the EU AI Act leaves open.

Dr Khlaaf, why aren’t the risks of AI better taken into account?

There is an overall lack of understanding of the nature of AI systems themselves and of how unreliable they are, which causes policymakers to disregard the risks intrinsic to them. This misunderstanding has often stemmed from unsubstantiated claims and AI hype, when in reality the nature of AI systems is to provide outcomes based on statistical and probabilistic inference rather than on any type of reasoning or factual evidence. This means that AI algorithms have persistent accuracy issues, making them unsuitable for applications that require precision and safety-criticality.

The big tech companies are bringing nuclear power into play to cover the rising energy consumption of data centres. Does that make sense?

The fact of the matter is that nuclear power plants require significantly longer timescales to build, timescales that are incompatible with the pace at which tech companies are building data centres and deploying AI. The average time to build a nuclear power plant has ranged from 10 to 20 years. So any immediate investment in nuclear energy will not meet the energy demands, now or in 10 years’ time, needed to alleviate the pressure from AI usage.

Google, Amazon and Oracle are turning to small modular reactors (SMRs). How safe is this technology and what risks does it carry?

By design, SMRs are actually safer than larger nuclear plants, but we still face several obstacles to realising their potential. First, SMRs are still under development, with over 80 designs in progress but only a handful in operation or testing. Any successful designs would then be required to undergo licensing, permitting, construction and regulatory activities. Second, SMRs will also lead to an increase in nuclear waste. Some studies have found that SMRs may in fact create greater and more complex nuclear waste per unit of energy produced than large power plants.

Microsoft wants to reactivate a reactor at Three Mile Island. How safe is that?

Three Mile Island is due to re-open in 2028, pending regulatory approval. However, it is also due to undergo a relicensing process in 2034. The concern here is that the urgency of AI's immediate energy needs may put unprecedented pressure on regulators to meet demand and potentially disregard any risks uncovered. The irony is that the investigation into the 1979 Three Mile Island accident found that its root cause was primarily a lack of safety culture.

Microsoft is already training LLMs to fast-track the nuclear approval process in the US. What do you think of this?

Producing the highly structured documents required for safety-critical systems is a safety process in itself. Nuclear power plants are highly complex systems; even the most minute failure can cascade into a catastrophic or high-risk event. To view these regulatory processes as merely burdensome paperwork speaks volumes about their understanding, or lack thereof, of nuclear safety.

Is the EU AI Act sufficient with regard to safety issues?

From the perspective of safety engineering, a key challenge is that the AI Act’s definition of “systemic risk” is exceptionally inconsistent and broad. It lumps together concepts like system safety with broader societal, financial and economic risks. Because these risks require very different mitigation strategies, the fuzzy definition renders the measures listed in the Act’s obligations fractured and often inconsequential.

Thank you for talking to eco@work.
The interviewer was Christiane Weihe.

---

Talking to eco@work: Dr Heidy Khlaaf, Chief AI Scientist at The AI Now Institute


Further information

Dr Heidy Khlaaf
Chief AI Scientist
The AI Now Institute

E-Mail: hello@heidyk.com
Web:   https://www.heidyk.com

Profile

Dr Heidy Khlaaf has long been fascinated by artificial intelligence and robotics: she started programming at the age of 15. She later earned a bachelor's degree in computer science and philosophy from Florida State University and a PhD in computer science from University College London. She sees many advantages in AI, for example in supplementing and automating work processes. At the same time, she works intensively on the safety of artificial intelligence in use. In the past, Khlaaf has worked as the technical lead of the AI safety team at Trail of Bits and as a senior engineer for systems safety at OpenAI.

The expert has already worked on safety audits for nuclear power plants and has led the development of standards and audit frameworks for safety-critical applications. Since 2024, Dr Heidy Khlaaf has been working as Chief AI Scientist at The AI Now Institute. The focus of her work is on safety engineering: she analyses the advantages and disadvantages of using artificial intelligence in safety-critical applications such as autonomous driving, unmanned aerial vehicles and nuclear energy.