Analyzing internal world models of humans, animals and AI


A team of scientists led by Prof. Dr. Ilka Diester, professor of optophysiology and spokesperson for the BrainLinks-BrainTools research center at the University of Freiburg, has developed a formal description of internal world models and published it in the journal Neuron. The formalized view helps scientists better understand how internal world models develop and operate, and it makes it possible to systematically compare the world models of humans, animals and artificial intelligence (AI). This clarifies, for example, where AI still falls short of human intelligence and how it could be developed further. Eleven Freiburg researchers from four faculties were involved in the interdisciplinary publication.

Internal world models: making predictions based on experience

People and animals abstract general rules from everyday experiences. They develop internal models of the world that help them find their way in unfamiliar contexts. Based on these abstracted models, they can make predictions in new situations and behave accordingly. For example, knowing similar cities that also have a city center, pedestrian zones and public transport can help them navigate an unfamiliar city. Even in social contexts such as dining in a restaurant, similar prior experiences help them behave appropriately.

World models become more tangible with the help of a new formal description

To formalize internal world models across different species, the researchers distinguish in their current publication between three intertwined abstract spaces: the task space, the neural space and the conceptual space. The task space encompasses everything an individual experiences. The neural space describes the different measurable states of the brain, from the molecular level through the activity of individual neurons to the activity of entire brain regions; the latter can be visualized with a functional magnetic resonance imaging (fMRI) scanner, for example, or measured with techniques such as high-density electrodes or calcium imaging. The equivalent of the neural space in AI is the activity of the nodes within the corresponding artificial neural network.

The conceptual space consists of pairs of states from the task space and the neural space. Each pair thus represents the status of an individual, linking internal processes to external influences. The current state evolves continually, transitioning to the next state with a certain probability. These combinations of an individual’s experiences on the one hand and the associated brain activity on the other, together with the dynamic transitions between them, make individual internal world models scientifically tangible.
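To make this description concrete, here is a minimal sketch of the idea: conceptual states modeled as pairs of a task state and a neural state, with probabilistic transitions between them, like a small Markov chain. The restaurant scenario, state names and transition probabilities are all invented for illustration and do not come from the paper itself.

```python
import random

# Illustrative sketch only (not from the paper): the conceptual space as
# pairs of (task state, neural state), with probabilistic transitions
# between them -- effectively a small Markov chain over conceptual states.

# Hypothetical conceptual states for a toy "restaurant" scenario.
conceptual_states = [
    ("enter_restaurant", "neural_pattern_A"),
    ("order_food",       "neural_pattern_B"),
    ("eat",              "neural_pattern_C"),
]

# Transition probabilities from each conceptual state to all states
# (each row of weights sums to 1).
transitions = {
    ("enter_restaurant", "neural_pattern_A"): [0.1, 0.8, 0.1],
    ("order_food",       "neural_pattern_B"): [0.0, 0.2, 0.8],
    ("eat",              "neural_pattern_C"): [0.0, 0.1, 0.9],
}

def step(state):
    """Move to the next conceptual state with the given probabilities."""
    weights = transitions[state]
    return random.choices(conceptual_states, weights=weights, k=1)[0]

# Trace a short trajectory through the conceptual space.
state = conceptual_states[0]
for _ in range(4):
    print(state)
    state = step(state)
```

In this toy picture, an experience (the task state) and the brain activity accompanying it (the neural state) only ever appear together, and the dynamics of the world model live in the transition probabilities between such pairs.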


Eliminating deficiencies in internal world models

Using the formalized view, scientists can now analyze internal world models across disciplinary boundaries and discuss how they arise and evolve. Findings from research on humans and animals should, for example, help improve AI: current AI systems are not yet able to check the plausibility of their own predictions, and even large language models such as ChatGPT have so far functioned only as pattern recognition engines without the ability to actually plan. Planning, however, is important for simulating and correcting strategies in unfamiliar situations before they are executed and potentially cause damage, as sketched below. The researchers also suspect that deficiencies in internal world models may underlie some mental illnesses such as depression or schizophrenia; a deeper understanding of world models could therefore help target medication and therapy more precisely.
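To illustrate why planning matters, the following sketch shows the general idea of model-based planning: candidate action sequences are played out inside an internal world model and evaluated before any action is taken in the real environment. The world model, actions and reward here are invented toy examples, not the paper's proposal or the mechanism of any specific AI system.

```python
# Illustrative sketch only: "playing out" strategies in an internal world
# model before acting, and picking the one whose simulated outcome is best.

def world_model(state, action):
    """Hypothetical learned world model: predicts next state and reward."""
    # Toy dynamics: "right" increases position, "left" decreases it.
    next_state = state + (1 if action == "right" else -1)
    reward = -abs(10 - next_state)  # toy goal: reach position 10
    return next_state, reward

def plan(state, horizon=5):
    """Simulate every action sequence up to `horizon` steps and return the
    first action of the sequence with the highest simulated return."""
    best_return = float("-inf")
    best_first_action = None

    def rollout(s, depth, total, first):
        nonlocal best_return, best_first_action
        if depth == horizon:
            if total > best_return:
                best_return, best_first_action = total, first
            return
        for action in ("left", "right"):
            next_s, reward = world_model(s, action)
            rollout(next_s, depth + 1, total + reward, first or action)

    rollout(state, 0, 0.0, None)
    return best_first_action

# The agent commits only to the action that performed best in simulation.
print(plan(state=0))  # prints "right": the simulated rollouts favor the goal
```

The point of the sketch is the ordering: simulate first, act second, so that bad strategies fail harmlessly inside the model instead of in the world.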
