Artificial neural networks learn better when they spend time not learning at all


Image: Artificial neural networks are computing systems inspired by the biological neural networks that constitute animal brains. Like their biological counterparts, they can learn (be trained) by processing examples and forming probability-weighted associations, then apply that knowledge to other tasks.

Credit: Neuroscience News

Depending on age, humans need 7 to 13 hours of sleep per 24 hours. A lot happens during this time: heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so the brain.

“The brain is very busy when we sleep, replaying what we’ve learned during the day,” said Maxim Bazhenov, PhD, professor of medicine and sleep researcher at the University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In previously published work, Bazhenov and colleagues reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects old memories from being forgotten.

Artificial neural networks draw on the architecture of the human brain to improve numerous technologies and systems, from basic research and medicine to finance and social media. In some respects they achieve superhuman performance, such as computational speed, but they fail in one key aspect: when artificial neural networks learn sequentially, new information overwrites what came before, a phenomenon called catastrophic forgetting.
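A toy sketch of the failure mode (hypothetical data and learning rate, not from the paper): a one-weight linear model trained by gradient descent on one task and then on a second ends up encoding only the second.

```python
# Minimal illustration of catastrophic forgetting: a one-weight linear model
# trained sequentially on two tasks keeps only the second task's mapping.
def train(w, samples, lr=0.1, epochs=100):
    """Gradient descent on squared error for pairs (x, y) with model y_hat = w * x."""
    for _ in range(epochs):
        for x, y in samples:
            w -= lr * (w * x - y) * x
    return w

task_a = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]     # y = 2x
task_b = [(1.0, -1.0), (2.0, -2.0), (3.0, -3.0)]  # y = -x

w = train(0.0, task_a)   # w converges near 2
w = train(w, task_b)     # training on task B alone overwrites that
print(round(w, 2))       # near -1: the y = 2x mapping is gone
```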

“In contrast, the human brain continuously learns and integrates new data into existing knowledge,” Bazhenov said, “and it typically learns best when new training is dovetailed with periods of sleep for memory consolidation.”

Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models can help mitigate the risk of catastrophic forgetting in artificial neural networks and increase their utility across a wide range of research interests.

The scientists used spiking neural networks, which mimic natural neural systems more closely: instead of information being communicated continuously, it is transmitted as discrete events (spikes) at specific points in time.
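As a rough sketch of the "discrete events" idea (illustrative constants, not the model from the paper), a leaky integrate-and-fire neuron accumulates input and emits a spike only when its membrane potential crosses a threshold:

```python
# Leaky integrate-and-fire neuron: continuous input becomes a train of
# discrete spike events (all parameters here are illustrative).
def lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    v, spikes = v_rest, []
    for i in input_current:
        v += dt * ((v_rest - v) + i) / tau  # leaky integration toward the input
        if v >= v_thresh:                   # threshold crossing -> spike
            spikes.append(1)
            v = v_rest                      # reset after spiking
        else:
            spikes.append(0)
    return spikes

spike_train = lif([1.5] * 100)  # constant drive yields regularly spaced spikes
print(sum(spike_train))         # a handful of discrete events, not 100 values
```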

They found that when the spiking networks were trained on a new task, but with occasional offline periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, the study authors said, "sleep" allowed the networks to replay old memories without explicitly using old training data.
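The paper's networks replay old patterns spontaneously, without access to old training data; a loose analogy (hypothetical data, numpy only) is to interleave a few old examples into new training and compare against purely sequential training:

```python
import numpy as np

# Two-weight linear model; the tasks are hypothetical and chosen so that
# training on task B alone pulls the weights away from task A's solution.
task_a = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 1.0)]
task_b = [(np.array([1.0, 1.0]), 0.0)]

def train(w, samples, lr=0.05, epochs=400):
    for _ in range(epochs):
        for x, y in samples:
            w = w - lr * (w @ x - y) * x  # gradient step on squared error
    return w

def error(w, samples):
    return float(np.mean([(w @ x - y) ** 2 for x, y in samples]))

w = train(np.zeros(2), task_a)              # learn task A first
seq = train(w.copy(), task_b)               # sequential: task B only
replay = train(w.copy(), task_b + task_a)   # "sleep": old examples interleaved

print(error(seq, task_a))     # large: task A is overwritten
print(error(replay, task_a))  # much smaller: old knowledge partly preserved
```

Note that the replayed examples here are real stored data, which is exactly what the sleep-like replay in the paper avoids; the sketch only illustrates why interleaving old patterns counteracts overwriting.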

Memories are represented in the human brain by patterns of synaptic weights, the strength or amplitude of the connections between neurons.

“When we get new information,” Bazhenov said, “neurons fire in a specific order and this strengthens the synapses between them. During sleep, the spiking patterns learned during our waking state are spontaneously repeated. It’s called reactivation or replay.

“Synaptic plasticity, the capacity to change or mold, is still present during sleep, and it can further enhance the synaptic weight patterns that represent memory, helping to prevent forgetting or to enable transfer of knowledge from old tasks to new ones.”
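The Hebbian idea in the quote, synapses strengthening under correlated firing and replay repeating that strengthening, can be caricatured in a few lines (both the rule and the numbers are illustrative only):

```python
# Toy Hebbian update: a synapse grows when pre- and post-synaptic neurons
# are active together; "sleep" replays the waking activity pattern.
def hebbian_step(w, pre, post, lr=0.1):
    return w + lr * pre * post

w = 0.5                            # initial synaptic weight (illustrative)
awake = [(1, 1), (1, 1), (0, 1)]   # (pre, post) activity while learning
replay = awake * 2                 # the same pattern reactivated during sleep
for pre, post in awake + replay:
    w = hebbian_step(w, pre, post)
print(w)  # co-active pairs strengthened the synapse, in waking and in replay
```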

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.

“It meant that these networks, like humans or animals, could learn continuously. Understanding how the human brain processes information during sleep may also help enhance memory in human subjects. Augmenting sleep rhythms can lead to better memory.

“In other projects, we use computer models to develop optimal strategies for applying stimulation during sleep, such as acoustic tones that enhance sleep rhythms and improve learning. This may be particularly important when memory is not optimal, for example when memory declines with aging or in some diseases such as Alzheimer’s disease.”

Co-authors include: Ryan Golden and Jean Erik Delanois, both at UC San Diego; and Pavel Sanda, Institute of Computer Science of the Czech Academy of Sciences.
