The more AI, the less understanding?
Roger Highfield OBE FRSB FMedSci
Science Director, Science Museum Group
Visiting professor at the Dunn School, University of Oxford, and the Department of Chemistry, UCL.
Published on November 20th, 2024
In the hyperbole about artificial intelligence, there have been claims about how AI, in the guise of large language models, can plausibly, confidently and succinctly summarise complex science [1], accompanied by warnings about its propensity for bias and factual inaccuracy and, most notoriously, its tendency to ‘hallucinate’ fake references.[2] As was the case with the internet, this fabulous tool has to be used wisely if we are to get the most benefit from it.
But there is a deeper problem with gazing at the world through an AI lens: if anything, it can obscure the process of science.
Over the past few centuries, science has advanced by a never-ending synergy of empiricism, in the form of observation and experiment, and of reason, in the form of mathematical theory and mechanistic understanding.[3]
Today, however, with the rise of AI, its most enthusiastic proponents, notably big tech companies, believe there is another way to make progress: train an AI with enough data about how the world works, and use it to infer the answers.
AI seems to be able to magic these answers out of thin air, just as Srinivasan Ramanujan amazed mathematicians a century ago with his ability to divine answers to problems (he gave credit to Namagiri, a local incarnation of Lakshmi, the Hindu goddess of good fortune).
But it takes trial and error to decide on the type and architecture of artificial neural network for a given problem. Moreover, AI is cheap on assumptions but exceedingly expensive on parameters: the ‘deep’ in deep learning refers to the number of layers in an AI’s neural network through which data is transformed, and the total number of parameters is the sum of all the weights in the network, which can run to many billions.
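To give a sense of scale, here is a minimal sketch in Python of how the parameter count of even a modest fully connected network adds up; the layer widths are illustrative numbers of my own, not drawn from any particular system.

```python
# Illustrative layer widths: an input layer, three hidden layers, an output layer.
layer_sizes = [1000, 4096, 4096, 4096, 10]

total_params = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    weights = n_in * n_out   # one weight per connection between adjacent layers
    biases = n_out           # one bias per neuron in the receiving layer
    total_params += weights + biases

print(f"{len(layer_sizes)} layers, {total_params:,} parameters")
# Roughly 38 million parameters for this toy network; the far wider and deeper
# networks behind large language models are how the count reaches many billions.
```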
The astronomical number of parameters is both the reason AIs can fit data so well and the reason they give no insight into the answers they provide. These parameters have no significance in the real world: they exist to fit data, not to understand it. The sheer number of parameters also undermines attempts to quantify the uncertainties in an AI’s predictions. Moreover, while you can train an AI on one data set, you can’t be sure how it will fare on a new one.
Many kinds of AI (generative methods) even rely on random number generators, so they can give different answers each time they are run, which means their answers can only be probabilistically true.
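A minimal sketch of where that randomness enters, with a toy probability distribution of my own standing in for the sampling step inside a real generative model:

```python
import random

# Toy stand-in for a generative model: a probability distribution over
# candidate outputs (a real model computes such a distribution at every step).
candidates = ["answer A", "answer B", "answer C"]
weights = [0.5, 0.3, 0.2]

def generate(rng: random.Random) -> str:
    # The random number generator enters at the sampling step.
    return rng.choices(candidates, weights=weights, k=1)[0]

# Runs seeded differently can return different answers to the same question...
print(generate(random.Random(1)), generate(random.Random(2)))
# ...whereas fixing the seed makes a run repeatable.
print(generate(random.Random(42)) == generate(random.Random(42)))  # True
```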
Ultimately AI rests on statistical inference and, as the old saw goes, correlation is not causation. And although you might think computers are objective, humans still play a central role in how AIs are set up and used, and in the assumptions on which they rest. Overall, computers are less trustworthy than many think.[4]
I am constantly surprised that there is not more unease about how relying on AI, a glorified form of look-up table, can sidestep the need for understanding, the kind that can give the public real insights into how the world works. The foundations of public engagement rest on providing a mechanistic view, and yet the rise of AI erects a barrier to the understanding of science.
The kinds of tangible demonstrations that electrified audiences in the Royal Institution centuries ago are harder to devise now that 21st century science has moved from magnets, wires and cannonballs to the invisible worlds of the atom, big data and gravitational waves.
As science shimmers into the arcane and abstract, AI now seems to wave away any need for explanation too, offering answers akin to the mathematical intuition so brilliantly deployed by Ramanujan. The problem is that intuition is harder to teach than constructing rigorous proofs, which are also crucial if you are to convince others that you are right. AI’s answers are just statistical predictions, yet, because most people do not understand how they are produced, they treat them as the pronouncements of a precise Delphic oracle.
And when it comes to science itself, there is a problem with AI that is little discussed: aside from the lack of mechanistic insights, it also lacks transparency, objectivity and reproducibility.[5] The last is crucial because science makes real progress when others can replicate a finding that changes the way we understand the world.
There are moves to remedy these shortcomings, for instance with what some call Explainable AI. But, until these more insightful forms of AI are commonplace, the lack of transparency about how and why an AI comes to its conclusions is corrosive: how is scientific understanding going to thrive when we have a tool that provides correlations, not causation? It won’t. One can see the attraction of AI in complex fields such as biology and medicine, but how can we trust the conclusions of something we don’t understand? We can’t.
Even though they are ‘black boxes’, AIs are undoubtedly useful. In my book Virtual You [6], written with my UCL colleague Peter Coveney, I argued that AI could have an important role when it comes to making digital twins of the human body. For example, when the mathematics becomes intractable, it is now possible to simulate the behaviour of the body’s subsystems with an AI ‘surrogate’, a machine-learning algorithm trained on a few well-chosen simulations (though it has to be said that multiscale surrogates are proving hard to find).
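The sketch below, which assumes scikit-learn and uses a cheap toy function of my own in place of a genuinely expensive simulation, illustrates the surrogate idea: a handful of simulation runs train a regressor that then stands in for the simulator.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Stand-in for an expensive physics-based simulation of some subsystem;
# in practice each call might cost hours of supercomputer time.
def expensive_simulation(x: np.ndarray) -> np.ndarray:
    return np.sin(3 * x) + 0.5 * x ** 2

# A few well-chosen simulation runs provide the training data...
x_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()

# ...and the surrogate learns to interpolate between them.
surrogate = GaussianProcessRegressor().fit(x_train, y_train)

# The surrogate answers almost instantly, with an uncertainty estimate,
# but it only mimics the simulation; it contains none of its physics.
x_new = np.array([[1.3]])
mean, std = surrogate.predict(x_new, return_std=True)
print(mean, std, expensive_simulation(x_new).ravel())
```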
The good news is that it is possible to make progress in the interim with ‘Big AI’ [7], in which the brute force of machine learning is augmented with physics-based models: mathematical theories of how the world works. In fluid dynamics, for example, so-called hybrid modelling – combining physics-based and data-driven modelling – has shown advantages over purely physics-based or purely machine-learning models. In drug design, the approach has been used, for instance, to predict antimicrobial resistance, to classify the shapes of enzymes and to help develop pandemic drugs at pandemic speed.
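One common flavour of hybrid modelling trains the data-driven part only on the residual, that is, on what a simplified physics-based model gets wrong. The sketch below, with toy functions and data of my own, shows the pattern rather than any published model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# The "true" system (unknown in practice): dominant physics plus an unmodelled effect.
def true_system(x):
    return 2.0 * x + 0.4 * np.sin(5 * x)

# Simplified physics-based model: captures only the dominant linear behaviour.
def physics_model(x):
    return 2.0 * x

# Observations of the true system.
x_obs = rng.uniform(0.0, 2.0, size=(200, 1))
y_obs = true_system(x_obs).ravel()

# The data-driven part is fitted only to the residual the physics model misses,
# here with simple sine/cosine features and ridge regression.
def features(x):
    return np.hstack([np.sin(5 * x), np.cos(5 * x)])

residual_model = Ridge(alpha=1e-3).fit(features(x_obs), y_obs - physics_model(x_obs).ravel())

def hybrid_model(x):
    return physics_model(x).ravel() + residual_model.predict(features(x))

# The hybrid prediction recovers the wiggle the physics-only model misses.
x_test = np.array([[0.7]])
print(physics_model(x_test).ravel(), hybrid_model(x_test), true_system(x_test).ravel())
```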
While AI is a powerful tool, its use in science must be curated to ensure it enhances, rather than compromises, scientific integrity. In particular, we have much to do in those scientific fields where understanding the "why" and "how" is as important as the results themselves.
Most of all, we need to do more to ensure that the public can be confident that the output of an AI is not merely plausible but reliably correct.
[1] https://arxiv.org/pdf/2405.00706
[2] https://royalsocietypublishing.org/doi/10.1098/rsos.240197
[3] https://royalsocietypublishing.org/doi/10.1098/rsta.2016.0153
[4] https://royalsocietypublishing.org/doi/full/10.1098/rsta.2020.0067
[5] https://pubs.acs.org/doi/full/10.1021/acs.jcim.4c01091
[6] https://press.princeton.edu/books/hardcover/9780691223278/virtual-you
[7] https://www.worldscientific.com/doi/abs/10.1142/9789811265679_0021
[8] https://royalsocietypublishing.org/doi/10.1098/rsfs.2021.0018
Copyright: © 2024 Roger Highfield. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in Frontiers Policy Labs is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.