3 Sources
[1]
DNA to AI: How Evolution Shapes Smarter Algorithms - Neuroscience News
Summary: A new AI algorithm inspired by the genome's ability to compress vast information offers insights into brain function and potential tech applications. Researchers found that this algorithm performs tasks like image recognition and video games almost as effectively as fully trained AI networks. By mimicking how genomes encode complex behaviors with limited data, the model highlights the evolutionary advantage of efficient information compression. The findings suggest new pathways for developing advanced, lightweight AI systems capable of running on smaller devices like smartphones.

In a sense, each of us begins life ready for action. Many animals perform amazing feats soon after they're born. Spiders spin webs. Whales swim. But where do these innate abilities come from? Obviously, the brain plays a key role as it contains the trillions of neural connections needed to control complex behaviors. However, the genome has space for only a small fraction of that information. This paradox has stumped scientists for decades.

Now, Cold Spring Harbor Laboratory (CSHL) Professors Anthony Zador and Alexei Koulakov have devised a potential solution using artificial intelligence. When Zador first encounters this problem, he puts a new spin on it. "What if the genome's limited capacity is the very thing that makes us so smart?" he wonders. "What if it's a feature, not a bug?" In other words, maybe we can act intelligently and learn quickly because the genome's limits force us to adapt.

This is a big, bold idea -- tough to demonstrate. After all, we can't stretch lab experiments across billions of years of evolution. That's where the idea of the genomic bottleneck algorithm emerges. In AI, generations don't span decades. New models are born with the push of a button.
Zador, Koulakov, and CSHL postdocs Divyansha Lachi and Sergey Shuvaev set out to develop a computer algorithm that folds heaps of data into a neat package -- much like our genome might compress the information needed to form functional brain circuits. They then test this algorithm against AI networks that undergo multiple training rounds. Amazingly, they find the new, untrained algorithm performs tasks like image recognition almost as effectively as state-of-the-art AI. Their algorithm even holds its own in video games like Space Invaders. It's as if it innately understands how to play.

Does this mean AI will soon replicate our natural abilities? "We haven't reached that level," says Koulakov. "The brain's cortical architecture can fit about 280 terabytes of information -- 32 years of high-definition video. Our genomes accommodate about one hour. This implies a 400,000-fold compression technology cannot yet match."

Nevertheless, the algorithm allows for compression levels thus far unseen in AI. That feature could have impressive uses in tech. Shuvaev, the study's lead author, explains: "For example, if you wanted to run a large language model on a cell phone, one way [the algorithm] could be used is to unfold your model layer by layer on the hardware." Such applications could mean more evolved AI with faster runtimes. And to think, it only took 3.5 billion years of evolution to get here.

Encoding innate ability through a genomic bottleneck

Animals are born with extensive innate behavioral capabilities, which arise from neural circuits encoded in the genome. However, the information capacity of the genome is orders of magnitude smaller than that needed to specify the connectivity of an arbitrary brain circuit, indicating that the rules encoding circuit formation must fit through a "genomic bottleneck" as they pass from one generation to the next.
Here, we formulate the problem of innate behavioral capacity in the context of artificial neural networks in terms of lossy compression of the weight matrix. We find that several standard network architectures can be compressed by several orders of magnitude, yielding pretraining performance that can approach that of the fully trained network. Interestingly, for complex but not for simple test problems, the genomic bottleneck algorithm also captures essential features of the circuit, leading to enhanced transfer learning to novel tasks and datasets. Our results suggest that compressing a neural circuit through the genomic bottleneck serves as a regularizer, enabling evolution to select simple circuits that can be readily adapted to important real-world tasks. The genomic bottleneck also suggests how innate priors can complement conventional approaches to learning in designing algorithms for AI.
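The abstract's framing, compressing a layer's weight matrix through a much smaller generative "genome", can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' implementation: the idea is that each neuron gets a short embedding, and a tiny "genomic" network maps each (presynaptic, postsynaptic) embedding pair to one synaptic weight. The embedding size `d`, hidden width `h`, and layer dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target layer to encode: a 256 x 256 weight matrix (65,536 parameters).
n_pre, n_post = 256, 256

# The "genome": per-neuron embeddings plus a tiny two-layer network that
# decodes a (pre, post) embedding pair into a single synaptic weight.
d = 8                                   # embedding size per neuron (assumed)
h = 16                                  # hidden units in the genomic network
E_pre = rng.normal(size=(n_pre, d))
E_post = rng.normal(size=(n_post, d))
W1 = rng.normal(size=(2 * d, h)) * 0.1  # genomic network, layer 1
W2 = rng.normal(size=(h, 1)) * 0.1      # genomic network, layer 2

def decode_weights():
    """Unfold the full weight matrix from the compact genome."""
    # All (pre, post) embedding pairs: shape (n_pre * n_post, 2d).
    pairs = np.concatenate(
        [np.repeat(E_pre, n_post, axis=0),
         np.tile(E_post, (n_pre, 1))], axis=1)
    hidden = np.tanh(pairs @ W1)
    return (hidden @ W2).reshape(n_pre, n_post)

W = decode_weights()

genome_params = E_pre.size + E_post.size + W1.size + W2.size
print(W.shape, genome_params, n_pre * n_post)  # genome << full matrix
```

With these (assumed) sizes the genome holds 4,368 parameters against 65,536 in the decoded matrix, roughly a 15-fold compression; in the paper's setting the genomic network would be selected (trained) so that the decoded circuit already performs well before any task learning.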
[2]
The next evolution of AI begins with ours: Neuroscientists devise a potential explanation for innate ability
The study is published in the journal Proceedings of the National Academy of Sciences.
[3]
The next evolution of AI begins with ours
The genome has space for only a small fraction of the information needed to control complex behaviors. So then how, for example, does a newborn sea turtle instinctually know to follow the moonlight? Cold Spring Harbor neuroscientists have devised a potential explanation for this age-old paradox. Their ideas should lead to faster, more evolved forms of artificial intelligence.
Researchers at Cold Spring Harbor Laboratory develop a new AI algorithm inspired by genomic compression, potentially revolutionizing AI efficiency and explaining innate abilities in animals.
Researchers at Cold Spring Harbor Laboratory (CSHL) have developed a groundbreaking AI algorithm that draws inspiration from the genome's ability to compress vast amounts of information. This innovative approach, dubbed the "genomic bottleneck algorithm," not only offers insights into brain function but also presents potential applications for more efficient AI systems [1].
For decades, scientists have grappled with a fundamental paradox: how do animals possess complex innate abilities despite the limited information capacity of their genomes? CSHL Professors Anthony Zador and Alexei Koulakov propose that this limitation might be a feature rather than a bug, forcing adaptability and intelligent behavior [2].
The research team, including postdocs Divyansha Lachi and Sergey Shuvaev, developed an algorithm that compresses large amounts of data into a compact format, mimicking how genomes might encode information for functional brain circuits. When tested against conventional AI networks, this untrained algorithm demonstrated remarkable performance in tasks such as image recognition and even video game playing [3].
The genomic bottleneck algorithm achieved compression levels previously unseen in AI, performing almost as effectively as state-of-the-art, fully trained AI networks in various tasks. This efficiency opens up possibilities for running complex AI models on smaller devices like smartphones, potentially revolutionizing mobile AI applications [1].
While the algorithm doesn't yet match the brain's full capabilities, it represents a significant step forward in understanding both biological and artificial intelligence. Koulakov notes that the brain's cortical architecture can fit about 280 terabytes of information, while our genomes can only accommodate about an hour's worth, implying a 400,000-fold compression that current technology cannot yet match [2].
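The quoted figures can be sanity-checked with back-of-envelope arithmetic, taking the article's own equivalences (280 TB equals "32 years of high-definition video"; the genome holds roughly "one hour" of the same stream) as the assumptions:

```python
# Back-of-envelope check on the compression figures quoted above.
# Assumed equivalences from the article: 280 TB of cortical capacity
# corresponds to 32 years of HD video; the genome holds about one hour.

hours_per_year = 365 * 24
video_hours = 32 * hours_per_year   # hours of video the cortex could hold
tb_per_hour = 280 / video_hours     # implied data rate, TB per hour

genome_tb = 1 * tb_per_hour         # genome capacity: one hour's worth
compression = 280 / genome_tb       # cortical capacity / genome capacity

print(video_hours, round(compression))
```

This yields 280,320 hours and a roughly 280,000-fold ratio, the same order of magnitude as the article's quoted 400,000-fold figure (which presumably uses different rounding for the video and genome sizes).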
The genomic bottleneck algorithm suggests new pathways for developing advanced, lightweight AI systems. It also provides a fresh perspective on how innate priors can complement conventional learning approaches in AI design. As Shuvaev, the study's lead author, explains, this could lead to more evolved AI with faster runtimes and broader applications [3].
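Shuvaev's phone example, unfolding a model "layer by layer on the hardware", can be sketched as decoding one weight matrix at a time at inference, so the full weight set never has to sit in memory at once. Everything below is an assumed toy scheme (per-layer neuron embeddings decoded as a low-rank product), not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_layer_genome(n_in, n_out, d=8):
    """Compact per-layer 'genome': neuron embeddings plus a d x d mixer.
    The layer's weights decode as E_in @ M @ E_out.T, an assumed low-rank
    scheme far smaller than the n_in x n_out matrix itself."""
    return (rng.normal(size=(n_in, d)),
            rng.normal(size=(d, d)),
            rng.normal(size=(n_out, d)))

def decode(genome):
    e_in, m, e_out = genome
    return e_in @ m @ e_out.T

# Only the compact genomes are stored; sizes mimic a small MLP.
genomes = [make_layer_genome(784, 256), make_layer_genome(256, 10)]

def forward(x, genomes):
    """Unfold the model layer by layer: decode one weight matrix,
    use it, and let it be freed before decoding the next."""
    for genome in genomes:
        w = decode(genome)          # materialize just this layer
        x = np.maximum(x @ w, 0.0)  # ReLU activation
    return x

out = forward(rng.normal(size=(1, 784)), genomes)
print(out.shape)  # one layer's weights in memory at a time
```

The memory saving is the point: here each decoded 784 x 256 matrix exists only for the duration of its own matrix multiply, which is what would let a heavily compressed model run on phone-class hardware.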
This breakthrough not only advances our understanding of biological information processing but also paves the way for more efficient and adaptable AI systems, bridging the gap between natural and artificial intelligence.