A study by Loughborough University, published in Physica D: Nonlinear Phenomena, outlines a new mathematical blueprint for building AI that can reveal how it learns, remembers and makes decisions.
The team have developed a prototype system with both a “brain” and a memory. Unlike conventional AI, it can learn continuously without losing past knowledge, avoid forming false or misleading memories, and mimic aspects of human thinking – such as strengthening or forgetting information over time – in a clear, controllable way.
“Intelligence has long been treated as something that emerges inside a black box,” said lead author Dr Natalia Janson, of Loughborough University’s Department of Mathematical Sciences. “We wanted to rethink AI from the ground up – and we’ve built a system where the inner workings of cognition are fully transparent.”
In early demonstrations, the prototype showed promising results: without supervision, it learned musical notes and short musical phrases, and it identified and stored colours from visual data such as cartoon images. Across all tasks it behaved in a predictable and traceable way, avoiding common problems seen in AI, including “catastrophic forgetting” and the formation of false memories.
Researchers say a major obstacle to transparent AI has been a limited understanding of how memory, behaviour and physical structure interact in natural intelligence. This gap has made it difficult to design systems that can both perform complex tasks and explain how they do so.
At the centre of the new approach is a mathematical concept called a “plastic vector field”, which models how information changes over time in a way that reflects how the brain processes and stores it. This allows each stage of the AI’s learning and cognition to be tracked, with transparency built in from the outset rather than added later.
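The paper’s actual formalism lives in the Physica D article; purely as a loose toy illustration of the general idea (not the authors’ model), one can treat cognition as a dynamical system dx/dt = f(x, W) whose vector field f is “plastic” – its parameters W change as the system learns. In the sketch below, memories are attractor wells of the field: learning adds or deepens a well, forgetting shallows it, and every stored memory and its strength can be listed at any time, which is the sense in which such a system is transparent. The class name, the Gaussian-well form of the field, and all parameters are our own illustrative assumptions.

```python
import numpy as np

class PlasticVectorField:
    """Toy 'plastic vector field' (illustrative only, not the paper's model).

    The state evolves as dx/dt = f(x), where f pulls the state towards
    stored memory patterns. Learning modifies f itself by adding or
    deepening Gaussian attractor wells, so the memory content is fully
    inspectable: `memories` and `strengths` describe the field completely.
    """

    def __init__(self, dim, width=0.5):
        self.dim = dim
        self.width = width
        self.memories = []   # well centres (the stored patterns)
        self.strengths = []  # well depths (the plastic variables)

    def learn(self, pattern, strength=1.0):
        """Plastic change: add a new well, or deepen an existing one."""
        pattern = np.asarray(pattern, dtype=float)
        for i, m in enumerate(self.memories):
            if np.linalg.norm(m - pattern) < 1e-6:
                self.strengths[i] += strength  # reinforce an existing memory
                return
        self.memories.append(pattern)          # new memory; old wells untouched,
        self.strengths.append(strength)        # so nothing is overwritten

    def forget(self, rate=0.1):
        """Controlled forgetting: all well depths decay gradually."""
        self.strengths = [s * (1.0 - rate) for s in self.strengths]

    def field(self, x):
        """The vector field f(x): a pull towards each memory, weighted
        by its strength and by a Gaussian in the distance to it."""
        x = np.asarray(x, dtype=float)
        v = np.zeros(self.dim)
        for m, s in zip(self.memories, self.strengths):
            d = m - x
            v += s * d * np.exp(-np.dot(d, d) / (2.0 * self.width ** 2))
        return v

    def recall(self, x0, dt=0.1, steps=500):
        """Recall = let the state flow along f from a cue (Euler steps)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = x + dt * self.field(x)
        return x

pvf = PlasticVectorField(dim=2)
pvf.learn([1.0, 0.0])
pvf.learn([0.0, 1.0])              # a second memory; the first is unaffected
recalled = pvf.recall([0.9, 0.1])  # a noisy cue near the first memory
# recalled settles close to the first memory, [1, 0]
```

In this toy picture, learning a new pattern never rewrites the wells belonging to earlier patterns, which loosely mirrors the article’s point about avoiding catastrophic forgetting; the real model’s mechanism is, of course, far more sophisticated.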
“To build intelligent systems that behave transparently and as intended, we need to address some fundamental questions,” Dr Janson said. “How do we recognise intelligence in humans? Through behaviour. What drives that behaviour? Brain activity. But what underlies brain activity itself? What is the ‘code’ of the brain, and how are memory and physical structure connected? These are the questions we’ve tried to answer.”
The team also examined existing artificial neural networks and found that many of their limitations may stem from how they are designed.
“What’s exciting – and a bit surprising – is that we can now clearly see why current neural networks struggle with explainability,” said one of the study’s authors, Professor Alexander Balanov, of Loughborough University’s Department of Physics. “It’s not just a technical hurdle. Their design makes it impossible to fully control how they learn and store information. That’s why new approaches like ours are so important.”
The prototype system is still relatively simple and needs to be scaled up for real-world use. The Loughborough team plan to develop it further and explore how it could be applied more widely – including in new types of hardware – with the aim of creating AI that is both powerful and easier to understand and trust.
Professor Balanov said: “Ultimately, this research brings us closer to technologies that people can confidently rely on in everyday life – from safer healthcare tools to more accountable automated decision-making.”