4 Sources
[1]
Energy and memory: A new neural network paradigm
Listen to the first notes of an old, beloved song. Can you name that tune? If you can, congratulations -- it's a triumph of your associative memory, in which one piece of information (the first few notes) triggers the memory of the entire pattern (the song), without you actually having to hear the rest of the song again. We use this handy neural mechanism to learn, remember, solve problems and generally navigate our reality.

"It's a network effect," said UC Santa Barbara mechanical engineering professor Francesco Bullo, explaining that associative memories aren't stored in single brain cells. "Memory storage and memory retrieval are dynamic processes that occur over entire networks of neurons."

In 1982, physicist John Hopfield translated this theoretical neuroscience concept into the artificial intelligence realm with the formulation of the Hopfield network. In doing so, he not only provided a mathematical framework for understanding memory storage and retrieval in the human brain, he also developed one of the first recurrent artificial neural networks, known for its ability to retrieve complete patterns from noisy or incomplete inputs. Hopfield won the 2024 Nobel Prize in Physics for this work.

However, according to Bullo and collaborators Simone Betteti, Giacomo Baggio and Sandro Zampieri at the University of Padua in Italy, the traditional Hopfield network model is powerful, but it doesn't tell the full story of how new information guides memory retrieval. "Notably," they write in a paper published in the journal Science Advances, "the role of external inputs has been largely underexplored, from their effects on neural dynamics to how they facilitate effective memory retrieval." The researchers suggest a model of memory retrieval they say is more descriptive of how we experience memory. "The modern version of machine learning systems, these large language models -- they don't really model memories," Bullo explained.
"You put in a prompt and you get an output. But it's not the same way in which we understand and handle memories in the animal world." While LLMs can return responses that sound convincingly intelligent, drawing upon the patterns of the language they are fed, they still lack the underlying reasoning and the experience of the physical world that animals have.

"The way in which we experience the world is something that is more continuous and less start-and-reset," said Betteti, lead author of the paper. Most treatments of the Hopfield model have tended to treat the brain as if it were a computer, he added, with a very mechanistic perspective. "Instead, since we are working on a memory model, we want to start with a human perspective." The main question inspiring the theorists was: As we experience the world that surrounds us, how do the signals we receive enable us to retrieve memories?

As Hopfield envisioned, it helps to conceptualize memory retrieval in terms of an energy landscape, in which the valleys are energy minima that represent memories. Memory retrieval is like exploring this landscape; recognition is when you fall into one of the valleys. Your starting position in the landscape is your initial condition.

"Imagine you see a cat's tail," Bullo said. "Not the entire cat, but just the tail. An associative memory system should be able to recover the memory of the entire cat." According to the traditional Hopfield model, the cat's tail (the stimulus) is enough to put you closest to the valley labeled "cat," he explained, treating the stimulus as an initial condition. But how did you get to that spot in the first place? "The classic Hopfield model does not carefully explain how seeing the tail of the cat puts you in the right place to fall down the hill and reach the energy minimum," Bullo said. "How do you move around in the space of neural activity where you are storing these memories? It's a little bit unclear."
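The energy-landscape picture of the classic model can be made concrete with a toy example. The sketch below is an illustration of the standard 1982-style Hopfield network, not the researchers' code; the network size and number of stored patterns are arbitrary choices. It stores a few random patterns with a Hebbian rule and retrieves one of them from a corrupted cue used purely as the initial condition:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                        # number of +/-1 "neurons"
patterns = rng.choice([-1, 1], size=(3, N))   # three stored memories

# Hebbian storage: W = (1/N) * sum_m p_m p_m^T, with no self-connections
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    """Hopfield energy; each stored pattern sits near a valley (minimum)."""
    return -0.5 * s @ W @ s

def retrieve(cue, steps=10):
    """Roll downhill from the cue: the cue is ONLY the starting point."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# A partial, noisy cue (the "cat's tail"): pattern 0 with ~20% of bits flipped
cue = patterns[0].copy()
flips = rng.choice(N, size=N // 5, replace=False)
cue[flips] *= -1

recalled = retrieve(cue)   # typically settles back onto pattern 0
```

Note the limitation the researchers highlight: the cue enters only as the initial value of `s`, and nothing in the model explains how the network arrived at that starting point.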
The researchers' Input-Driven Plasticity (IDP) model aims to address this lack of clarity with a mechanism that gradually integrates past and new information, guiding the memory retrieval process to the correct memory. Instead of applying the two-step algorithmic memory retrieval on the rather static energy landscape of the original Hopfield network model, the researchers describe a dynamic, input-driven mechanism. "We advocate for the idea that as the stimulus from the external world is received (e.g., the image of the cat tail), it changes the energy landscape at the same time," Bullo said. "The stimulus simplifies the energy landscape so that no matter what your initial position, you will roll down to the correct memory of the cat."

Additionally, the researchers say, the IDP model is robust to noise -- situations where the input is vague, ambiguous, or partially obscured -- and in fact uses the noise as a means to filter out less stable memories (the shallower valleys of this energy landscape) in favor of the more stable ones. "We start with the fact that when you're gazing at a scene your gaze shifts in between the different components of the scene," Betteti said. "So at every instant in time you choose what you want to focus on but you have a lot of noise around." Once you lock into the input to focus on, the network adjusts itself to prioritize it, he explained.

Choosing what stimulus to focus on, a.k.a. attention, is also the main mechanism behind another neural network architecture, the transformer, which has become the heart of large language models like ChatGPT. While the IDP model the researchers propose "starts from a very different initial point with a different aim," Bullo said, there's a lot of potential for the model to be helpful in designing future machine learning systems. "We see a connection between the two, and the paper describes it," Bullo said.
"It is not the main focus of the paper, but there is this wonderful hope that these associative memory systems and large language models may be reconciled."
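One simple way to realize the input-driven idea in code is to let the stimulus add a Hebbian term to the synapses, so the landscape itself is reshaped while the input also biases the dynamics. This is a hedged sketch under assumed dynamics, not the equations from the Science Advances paper; the modulation strength `beta` is an invented parameter:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))   # stored memories
W = (patterns.T @ patterns) / N               # baseline Hebbian weights
np.fill_diagonal(W, 0.0)

def input_shaped_energy(s, u, beta=1.0):
    """Energy on the input-shaped landscape: the stimulus u contributes an
    extra Hebbian term beta * u u^T / N, deepening the valley of whichever
    stored pattern u resembles and leaving the others comparatively shallow."""
    Wu = W + beta * np.outer(u, u) / N
    return -0.5 * s @ Wu @ s

def retrieve_idp(u, beta=1.0, steps=20):
    """Run the dynamics on the reshaped landscape from a RANDOM state:
    in the input-driven view the starting point matters far less."""
    Wu = W + beta * np.outer(u, u) / N
    s = rng.choice([-1, 1], size=N)           # arbitrary initial condition
    for _ in range(steps):
        s = np.where(Wu @ s + beta * u >= 0, 1, -1)
    return s

# Stimulus: a corrupted copy of pattern 0 (the "cat's tail" again)
u = patterns[0].copy()
flips = rng.choice(N, size=N // 4, replace=False)
u[flips] *= -1

# On the reshaped landscape the cued memory's valley is deepened
e_cued = input_shaped_energy(patterns[0], u)
e_other = input_shaped_energy(patterns[1], u)
```

In this toy version the stimulus does two things at once, in the spirit of the quotes above: it tilts the terrain (the `np.outer(u, u)` term) and it keeps acting during the descent (the `beta * u` bias), rather than serving only as an initial condition.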
[2]
Rewiring Memory: A New Model That Learns Like a Human Brain - Neuroscience News
Summary: A new memory model called Input-Driven Plasticity (IDP) offers a more human-like explanation for how external stimuli help us retrieve memories, building on the foundations of the classic Hopfield network. Unlike traditional models, which assume memory recall happens from a fixed starting point, the IDP framework describes how stimuli reshape the brain's "energy landscape" in real time to guide memory retrieval. This dynamic approach better reflects how we remember things in real life, like recognizing a cat from just its tail. The model is also robust to noise, filtering out weak memories in favor of stable, meaningful ones, offering insights for future AI systems.

Input-Driven Dynamics for Robust Memory Retrieval in Hopfield Networks

The Hopfield model provides a mathematical framework for understanding the mechanisms of memory storage and retrieval in the human brain. This model has inspired decades of research on learning and retrieval dynamics, capacity estimates, and sequential transitions among memories. Notably, the role of external inputs has been largely underexplored, from their effects on neural dynamics to how they facilitate effective memory retrieval. To bridge this gap, we propose a dynamical system framework in which the external input directly influences the neural synapses and shapes the energy landscape of the Hopfield model. This plasticity-based mechanism provides a clear energetic interpretation of the memory retrieval process and proves effective at correctly classifying mixed inputs. Furthermore, we integrate this model within the framework of modern Hopfield architectures to elucidate how current and past information are combined during the retrieval process. Last, we embed both the classic and the proposed model in an environment disrupted by noise and compare their robustness during memory retrieval.
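The abstract's reference to "modern Hopfield architectures" points at the continuous-valued generalization of the network, whose one-step retrieval rule is the same computation as transformer attention. This correspondence comes from the modern-Hopfield literature, not from this paper's own code; the dimensions and inverse temperature `beta` below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
d, M = 16, 5
X = rng.standard_normal((M, d))   # M stored patterns, one per row

def softmax(z):
    z = z - z.max()               # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def modern_hopfield_update(xi, beta=4.0):
    """One retrieval step: xi_new = X^T softmax(beta * X xi).
    Reading xi as a query and X as keys and values, this is attention."""
    return X.T @ softmax(beta * (X @ xi))

# Query with a noisy version of pattern 0; one update pulls it back
xi = X[0] + 0.3 * rng.standard_normal(d)
xi_new = modern_hopfield_update(xi)
```

A single update typically moves the noisy query much closer to the stored pattern it most resembles, which is why one attention layer can act as an associative-memory lookup.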
[3]
Energy and memory: A new neural network paradigm
[4]
New IDP model rethinks how our brains actually retrieve memories
Ever wondered how hearing just the first few notes of a familiar song can instantly bring the entire melody to mind? This marvel of cognition is known as associative memory, a fundamental neural mechanism that allows us to link pieces of information, retrieve complete patterns from partial cues, and navigate the complexities of our world. Now, researchers are proposing a new model that offers deeper insights into how our brains achieve this, particularly how external stimuli actively guide the memory retrieval process - an aspect they believe has been largely overlooked in traditional AI models of memory.

Associative memory is not just about recalling songs; it's a cornerstone of learning, problem-solving, and our general ability to make sense of reality. When one piece of information triggers the memory of a larger, related pattern - like a scent evoking a childhood memory, or a single word bringing forth a complex concept - associative memory is at work. This isn't a simple one-to-one storage system. "It's a network effect," explained UC Santa Barbara mechanical engineering professor Francesco Bullo. "Memory storage and memory retrieval are dynamic processes that occur over entire networks of neurons." These intricate neural networks allow for the robust and flexible recall that characterizes human memory.

In 1982, physicist John Hopfield, who was awarded the Nobel Prize for his work in 2024, famously translated this neuroscience concept into the realm of artificial intelligence with the Hopfield network. This was a landmark achievement, providing a mathematical framework to understand memory storage and retrieval. The Hopfield network, one of the first recurrent artificial neural networks, became renowned for its ability to retrieve complete patterns even when presented with noisy or incomplete inputs, much like our own brains.
Despite the power of the traditional Hopfield network, Professor Bullo and his collaborators -- Simone Betteti, Giacomo Baggio, and Sandro Zampieri at the University of Padua in Italy -- argue that it doesn't fully capture the nuances of how new, incoming information steers the memory retrieval process. In their paper published in the journal Science Advances, they state, "Notably, the role of external inputs has been largely underexplored, from their effects on neural dynamics to how they facilitate effective memory retrieval."

The researchers suggest that current AI models, including very sophisticated large language models (LLMs), don't truly replicate the way animal brains, including human ones, handle memories. "The modern version of machine learning systems, these large language models -- they don't really model memories," Bullo explained. "You put in a prompt and you get an output. But it's not the same way in which we understand and handle memories in the animal world." While LLMs can generate impressively coherent and intelligent-sounding responses based on the vast patterns in their training data, they lack the continuous, experience-based reasoning grounded in the physical world that animals possess.

Simone Betteti, lead author of the paper, elaborated on this point: "The way in which we experience the world is something that is more continuous and less start-and-reset." He noted that many previous treatments of the Hopfield model adopted a mechanistic, computer-like perspective of the brain. "Instead, since we are working on a memory model, we want to start with a human perspective." The central question driving their research was: As we continuously perceive the world around us, how do these incoming signals enable us to retrieve relevant memories?

Hopfield's original model conceptualizes memory retrieval using an "energy landscape" metaphor.
In this landscape, memories are represented by valleys, or energy minima. The process of memory retrieval is akin to a ball rolling across this landscape until it settles into one of these valleys, signifying recognition or recall. Your starting point on this landscape is your initial condition, influenced by the cue you receive.

"Imagine you see a cat's tail," Bullo illustrated. "Not the entire cat, but just the tail. An associative memory system should be able to recover the memory of the entire cat." According to the traditional Hopfield model, the stimulus of the cat's tail acts as an initial condition, placing your "ball" closest to the "cat" valley. However, a critical question remained. "The classic Hopfield model does not carefully explain how seeing the tail of the cat puts you in the right place to fall down the hill and reach the energy minimum," Bullo pointed out. "How do you move around in the space of neural activity where you are storing these memories? It's a little bit unclear."

To address this lack of clarity, the researchers propose their Input-Driven Plasticity (IDP) model. This new framework introduces a mechanism whereby past and new information are gradually integrated, actively guiding the memory retrieval process toward the correct memory. Unlike the somewhat static energy landscape of the original Hopfield network, where retrieval is a two-step algorithmic process, the IDP model describes a dynamic, input-driven mechanism. "We advocate for the idea that as the stimulus from the external world is received (e.g., the image of the cat tail), it changes the energy landscape at the same time," Bullo stated. In essence, the external input - the cat's tail - doesn't just give you a starting point; it actively reshapes the terrain. "The stimulus simplifies the energy landscape so that no matter what your initial position, you will roll down to the correct memory of the cat."

A significant advantage of the IDP model is its robustness to noise.
When an input is vague, ambiguous, or partially obscured, the model doesn't simply falter. Instead, it can effectively use this "noise" to its advantage, filtering out less stable memories (the shallower valleys in the energy landscape) and favoring the more stable, deeply entrenched ones. Betteti provided an analogy: "We start with the fact that when you're gazing at a scene your gaze shifts in between the different components of the scene. So at every instant in time you choose what you want to focus on but you have a lot of noise around." Once you "lock into" an input to focus on, he explained, the network dynamically adjusts itself to prioritize that input, effectively sculpting the memory landscape in real time.

This selective focus is essentially attention, the mechanism at the heart of the transformer architecture; transformers are the engines behind today's leading LLMs like ChatGPT. While the IDP model proposed by Bullo and his colleagues "starts from a very different initial point with a different aim," there's real potential for this new understanding of associative memory to inform the design of future machine learning systems. The way the IDP model handles continuous input and dynamically adjusts its internal state could offer new pathways for creating AI that learns and reasons in a more human-like, context-aware manner. "We see a connection between the two, and the paper describes it," Bullo said. "It is not the main focus of the paper, but there is this wonderful hope that these associative memory systems and large language models may be reconciled."
Researchers propose a new Input-Driven Plasticity (IDP) model that offers a more human-like explanation for how external stimuli guide memory retrieval, building on the classic Hopfield network and potentially influencing future AI systems.
Researchers from UC Santa Barbara and the University of Padua have introduced a groundbreaking model called Input-Driven Plasticity (IDP) that promises to reshape our understanding of memory retrieval in both biological and artificial neural networks [1][2][3]. This new model builds upon the foundational work of John Hopfield, who won the Nobel Prize in 2024 for his contributions to the field of artificial neural networks [1][2][3].

The classic Hopfield network, while powerful, has limitations in explaining how external stimuli guide the memory retrieval process. Francesco Bullo, a mechanical engineering professor at UC Santa Barbara, explains, "The classic Hopfield model does not carefully explain how seeing the tail of the cat puts you in the right place to fall down the hill and reach the energy minimum" [1][2][3]. This gap in understanding has led researchers to develop a more nuanced model that better reflects human memory processes.

The IDP model introduces a dynamic, input-driven mechanism that gradually integrates past and new information to guide memory retrieval. Unlike the static energy landscape of the original Hopfield network, the IDP model proposes that external stimuli actively reshape the memory landscape [1][2][3].

Bullo elaborates, "We advocate for the idea that as the stimulus from the external world is received (e.g., the image of the cat's tail), it changes the energy landscape at the same time. The stimulus simplifies the energy landscape so that no matter what your initial position, you will roll down to the correct memory of the cat" [1][2][3].

A key feature of the IDP model is its robustness to noise. The model can effectively handle situations where input is vague, ambiguous, or partially obscured. Interestingly, it uses this noise to filter out less stable memories in favor of more stable ones [1][2][3].

Simone Betteti, the lead author of the study, explains, "We start with the fact that when you're gazing at a scene, your gaze shifts in between the different components of the scene. So at every instant in time, you choose what you want to focus on, but you have a lot of noise around" [1][3]. This aspect of the model aligns with how humans process visual information and selectively attend to specific stimuli.

While current large language models (LLMs) have made significant strides in generating human-like responses, they still lack the continuous, experience-based reasoning grounded in the physical world that characterizes animal cognition. The IDP model offers a potential bridge between artificial and biological memory processes [1][2][3].

Bullo notes, "The modern version of machine learning systems, these large language models -- they don't really model memories. You put in a prompt and you get an output. But it's not the same way in which we understand and handle memories in the animal world" [1][2][3].

The researchers see potential connections between their IDP model and other neural network architectures, such as transformers, which are at the heart of many current AI systems. While the IDP model "starts from a very different initial point with a different aim," Bullo suggests that it could be instrumental in designing future machine learning systems [1][3].
As AI continues to evolve, models like IDP that draw inspiration from biological memory processes may play a crucial role in developing more sophisticated and human-like artificial intelligence systems. This research not only advances our understanding of memory retrieval but also opens new avenues for the future of AI and cognitive science.