Beyond artificial safety

Use AI as support
The hype around AI and the unsubstantiated claims put forward by many AI labs are partly responsible for the risks of AI not being adequately taken into account, says Heidy Khlaaf.

“In reality, however, the nature of AI systems is that they are not based on factual evidence: they produce outcomes based on statistical and probabilistic inference. This means they have persistent accuracy issues, which makes them unsuitable for applications that require precision and safety criticality. And in my view, AI should not be used in decision-making that impacts the livelihoods or safety of human beings.”
She is referring, for example, to the use of AI systems in border governance or in autonomous weapons. In her expert opinion, AI should augment human capabilities rather than replace them. “Currently, there is a complete disregard of AI’s pitfalls and susceptibility to error, including in the way AI systems are trained, which embeds human bias and discrimination. This leads to real harms.”
An early fascination
As a teenager, Dr Heidy Khlaaf was fascinated by AI and robotics and started programming at the age of 15. These days, she still emphasises the benefits that they can bring. “For example, they can augment and automate workloads in manufacturing that involve potentially dangerous manual labour.” Safety engineering is currently the main focus of her work, however. She has already analysed AI use in numerous safety-critical applications – such as nuclear energy, autonomous driving and unmanned aerial vehicles.
Nuclear power – making a comeback?
Nowadays, the big tech companies are turning to nuclear power to meet the rising energy consumption of AI. But this technology cannot permanently satisfy the energy hunger of artificial intelligence, says Dr Heidy Khlaaf. “Given the timescales alone, nuclear power isn’t technically feasible. Nuclear power plants are simply incompatible with the pace at which tech companies are building data centres and deploying AI – after all, the average time to build a nuclear power plant ranges from 10 to as long as 20 years.” She also draws attention to the risks associated with nuclear power, such as high-level radioactive waste and the uncontrolled release of radioactive material, and emphasises: “These cannot be justified by the ubiquitous deployment of AI. What is needed here is proportionality between the risks and the potential benefits.”
Dr Heidy Khlaaf is also critical of AI companies like Google, Amazon and Oracle, which present Small Modular Reactors (SMRs) as a solution to the rising energy demand even though the feasibility of SMRs remains unclear. “They are still under development, with only a handful in operation or testing. And any that do pass testing would then be required to undergo licensing, approval, construction and regulatory processes. So SMRs may not be available for another decade. This is incompatible with the speed at which AI is being deployed.” An increase in nuclear waste is another factor weighing against SMRs, in Heidy Khlaaf’s view.
Lack of safety culture – then and now
The reactivation of a reactor at Three Mile Island (TMI), which Microsoft is aiming for in 2028, is also viewed critically by the Chief Scientist. “This will require regulatory approval. However, it’s also due to undergo a relicensing process in 2034.” Although another relicensing approval may not be guaranteed, Heidy Khlaaf sees a justified concern that the urgency of AI’s immediate energy needs may put immense pressure on regulators, potentially leading them to disregard risks. “The irony in this is that the Three Mile Island accident in 1979 was caused mainly by a lack of safety culture. If tech corporations with no nuclear safety record or culture are able to influence the operation of a nuclear plant, this raises numerous questions about the impact on safety, but also about the concentration of power that would result.”
No bureaucracy for its own sake
There are also plans to use AI in the approval process for nuclear power plants in future – if Microsoft has its way. The company is already training large language models (LLMs) to fast-track the process. “Producing highly structured documents for safety-critical systems is not in fact a box-ticking exercise; it is actually a safety process in itself. The nuclear regulatory process is not bureaucratic for the sake of it; it’s there to keep everyone safe. Even the most minute of failures in a plant can cascade into a catastrophic or high-risk event. To view these regulatory processes as merely burdensome paperwork speaks volumes about the tech corporations’ understanding, or major lack thereof, of nuclear safety.”
Regulating for safety
From Heidy Khlaaf’s perspective as a scientist, regulation of artificial intelligence is therefore essential. The EU AI Act is a first step, but in her view, it lacks a clear definition of systemic risk. “In the AI Act, this is exceptionally inconsistent and broad. The legislation lumps together concepts like system safety with broader societal, financial and economic risks. As these risks require very different mitigation strategies, the fuzzy definition of systemic risk means that the measures listed in the obligations are fractured and often inconsequential.”
International regulation would also be sensible from Dr Heidy Khlaaf’s perspective, not only to mitigate the risks and harms that stem from AI’s use, but also to correct some of the systemic issues. “There is an immense concentration of power in AI infrastructure, which has now expanded to our constrained energy sources as well.”
Dr Heidy Khlaaf has worked for the AI Now Institute since 2024. She specialises in safety engineering, analysing the benefits and drawbacks of artificial intelligence (AI) use in safety-critical applications. She holds a doctorate in Computer Science. She has also worked on safety audits for nuclear power plants and led the development of standards and evaluation frameworks for safety-critical systems.