Curated by THEOUTPOST
On Tue, 8 Oct, 4:06 PM UTC
58 Sources
[1]
ET Explainer: What's up with AI and the 2024 Physics Nobel Prize?
When John Hopfield and Geoffrey Hinton were awarded the physics Nobel Prize for 2024, almost everyone was aghast. Why were two computer scientists, pioneers no doubt in advancing machine learning (ML) that powers artificial intelligence (AI), being given the top-most prize for physics? The short answer is that under the hood of ML lie some of the most intricate and elegant concepts from physics. It is worth taking up John Hopfield's contributions first, because they are not well understood outside the circle of AI researchers. To really get to Hopfield's ideas we must think about how we humans arrive at conclusions, some intelligent and some stupid. Memory and intelligence are closely connected. Remember how seeing a particular colour or pattern reminds us of someone. Sometimes just a shirt and its scent -- Heath Ledger in Brokeback Mountain -- or a sponge cake -- the madeleine and Marcel Proust -- transports us back in time and place. This is called associative memory. There is an initial trigger, and all our brain cells collectively fish out "that memory". That's what a Hopfield network does. It goes from a "high excitement" state of trying to find a match till it settles down into a "chill, I got it" state once the match is found. Now let's consider Geoffrey Hinton's contribution. How do we learn anything? By making mistakes and then correcting our assumptions till we finally get the correct answers. Hinton used a technique rooted in statistical physics, the Boltzmann machine (BM). Simple things like "it looks cloudy, maybe I should carry an umbrella" don't require too many layers of analysis. Hinton's clever use of the BM can tackle multi-layered, complex decisions. Ergo, the 2024 Nobel. -- Shishir Prasad
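That settling-down behaviour can be sketched in a few lines of code. Below is a minimal, illustrative Hopfield-style network in Python: it stores one pattern, corrupts it, and then lets the network settle back into the stored memory. The pattern, its size and the number of update sweeps are arbitrary choices for demonstration, not details taken from Hopfield's paper.

```python
import numpy as np

# A tiny Hopfield-style associative memory: store one binary pattern
# (+1/-1 values), then recover it from a corrupted copy by repeatedly
# updating neurons until the network settles into a low-energy state.

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the "memory" to store
n = pattern.size

# Hebbian rule: strengthen connections between neurons that agree.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                           # no self-connections

def energy(state):
    """Hopfield energy: lower means the network is more 'settled'."""
    return -0.5 * state @ W @ state

probe = pattern.copy()
probe[[1, 4, 6]] *= -1                             # the imperfect "trigger"

state = probe.copy()
for sweep in range(5):
    for i in range(n):
        # Each neuron aligns with the weighted vote of the others,
        # which never increases the energy.
        state[i] = 1 if W[i] @ state >= 0 else -1
    print(f"sweep {sweep}: energy = {energy(state):.1f}")

print("recovered stored pattern:", bool(np.array_equal(state, pattern)))
```

Real Hopfield networks store many patterns at once; this sketch keeps just one so the "trigger, then settle" behaviour is easy to see.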
[2]
Why the Nobel Prize in Physics Went to AI Research
The Nobel Prize Committee for Physics caught the academic community off-guard by handing the 2024 award to John J. Hopfield and Geoffrey E. Hinton for their foundational work in neural networks. The pair won the prize for their seminal papers, both published in the 1980s, that described rudimentary neural networks. Though much simpler than the networks used for modern generative AI like ChatGPT or Stable Diffusion, their ideas laid the foundations on which later research built. Even Hopfield and Hinton didn't believe they'd win, with the latter telling The Associated Press he was "flabbergasted." After all, AI isn't what comes to mind when most people think of physics. However, the committee took a broader view, in part because the researchers based their neural networks on "fundamental concepts and methods from physics." "Initially, I was surprised, given it's the Nobel Prize in Physics, and their work was in AI and machine learning," says Padhraic Smyth, a distinguished professor at the University of California, Irvine. "But thinking about it a bit more, it was clearer to me why [the Nobel Prize Committee] did this." He added that physicists in statistical mechanics have "long thought" about systems that display emergent behavior. Hopfield first explored these ideas in a 1982 paper on neural networks. He described a type of neural network, later called a Hopfield network, formed by a single layer of interconnected neurons. The paper, which was originally categorized under biophysics, said a neural network could retain "memories" from "any reasonably sized subpart." Hinton expanded on that work to conceptualize the Boltzmann machine, a more complex neural network described in a 1985 paper Hinton co-authored with David H. Ackley and Terrence J. Sejnowski. They introduced the concept of "hidden units," additional layers of neurons which exist between the input and output layers of a neural network but don't directly interact with either. This makes it possible to handle tasks that require a more generalized understanding, like classifying images. So, what's the connection to physics? Hopfield's paper references the concept of a "spin glass," a material in which disordered magnetic particles lead to complex interactions. Hinton and his co-authors drew on statistical mechanics, a field of physics that uses statistics to describe the behavior of particles in a system. They even named their network in honor of Ludwig Boltzmann, the physicist whose work formed the foundation of statistical mechanics. And the connection between neural networks and physics isn't a one-way street. Machine learning was crucial to the discovery of the Higgs boson, where it sorted the data generated by billions of proton collisions. This year's Nobel Prize for Chemistry further underscored machine learning's importance in research, as the award went to a trio of scientists who built an AI model to predict the structures of proteins. While Hopfield and Hinton authored influential papers, their contributions to machine learning were cemented by their continued work, and both won multiple awards before the Nobel Prize. Among others, Hopfield won the Boltzmann Medal in 2022; Hinton received the IEEE Frank Rosenblatt Award in 2014, the IEEE James Clerk Maxwell Medal in 2016, and the Turing Award in 2018 (that last one alongside Yann LeCun and Yoshua Bengio). Smyth saw Hopfield's efforts first-hand as a student at the California Institute of Technology. 
"Hopfield was able to bring together mathematicians, engineers, computer scientists, and physicists. He got them in the same room, got them excited about modeling the brain, doing pattern recognition and machine learning, unified by mathematical theories he brought in from physics." In 2012, Hinton co-founded a company called DNNResearch with two of his students; Ilya Sutskever, who later co-founded OpenAI, and Alex Krizhevsky. Together, the trio collaborated on AlexNet, a hugely influential neural network for computer vision. Hinton also taught at the University of Toronto, where he continued to champion machine learning. Navdeep Jaitly, now a deep learning researcher at Apple, said Hinton inspired new generations of engineers and researchers. In Jaitly's case, the influence was direct; Jaitly studied under Hinton at the University of Toronto. "I came in with experience in statistical modeling," says Jaitly, "but Hinton still managed to entirely change how I think about problem solving. In terms of his contributions to machine learning, his methods are central to almost everything we do."
[3]
Physics Nobel awarded to neural network pioneers who laid foundations for AI
The 2024 Nobel Prize in Physics has been awarded to scientists John Hopfield and Geoffrey Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Inspired by ideas from physics and biology, Hopfield and Hinton developed computer systems that can memorize and learn from patterns in data. Despite never directly collaborating, they built on each other's work to develop the foundations of the current boom in machine learning and artificial intelligence (AI). What are neural networks? (And what do they have to do with physics?) Artificial neural networks are behind much of the AI technology we use today. In the same way your brain has neuronal cells linked by synapses, artificial neural networks have digital neurons connected in various configurations. Each individual neuron doesn't do much. Instead, the magic lies in the pattern and strength of the connections between them. Neurons in an artificial neural network are "activated" by input signals. These activations cascade from one neuron to the next in ways that can transform and process the input information. As a result, the network can carry out computational tasks such as classification, prediction and making decisions. Most of the history of machine learning has been about finding ever more sophisticated ways to form and update these connections between artificial neurons. While the foundational idea of linking together systems of nodes to store and process information came from biology, the mathematics used to form and update these links came from physics. Networks that can remember John Hopfield (born 1933) is a US theoretical physicist who made important contributions over his career in the field of biological physics. However, the Nobel Physics prize was for his work developing Hopfield networks in 1982. Hopfield networks were one of the earliest kinds of artificial neural networks. Inspired by principles from neurobiology and molecular physics, these systems demonstrated for the first time how a computer could use a "network" of nodes to remember and recall information. The networks Hopfield developed could memorize data (such as a collection of black and white images). These images could be "recalled" by association when the network is prompted with a similar image. Although of limited practical use, Hopfield networks demonstrated that this type of ANN could store and retrieve data in new ways. They laid the foundation for later work by Hinton. Machines that can learn Geoff Hinton (born 1947), sometimes called one of the "godfathers of AI", is a British-Canadian computer scientist who has made a number of important contributions to the field. In 2018, along with Yoshua Bengio and Yann LeCun, he was awarded the Turing Award (the highest honor in computer science) for his efforts to advance machine learning generally, and specifically a branch of it called deep learning. The Nobel Prize in Physics, however, is specifically for his work with Terrence Sejnowski and other colleagues in 1984, developing Boltzmann machines. These are an extension of the Hopfield network that demonstrated the idea of machine learning -- a system that lets a computer learn not from a programmer, but from examples of data. Drawing from ideas in the energy dynamics of statistical physics, Hinton showed how this early generative computer model could learn to store data over time by being shown examples of things to remember. 
The Boltzmann machine, like the Hopfield network before it, did not have immediate practical applications. However, a modified form (called the restricted Boltzmann machine) was useful in some applied problems. More important was the conceptual breakthrough that an artificial neural network could learn from data. Hinton continued to develop this idea. He later published influential papers on backpropagation (the learning process used in modern machine learning systems) and convolutional neural networks (the main type of neural network used today for AI systems that work with image and video data). Why this prize, now? Hopfield networks and Boltzmann machines seem whimsical compared to today's feats of AI. Hopfield's network contained only 30 neurons (he tried to make one with 100 nodes, but it was too much for the computing resources of the time), whereas modern systems such as ChatGPT can have millions. However, today's Nobel prize underscores just how important these early contributions were to the field. While recent rapid progress in AI -- familiar to most of us from generative AI systems such as ChatGPT -- might seem like vindication for the early proponents of neural networks, Hinton at least has expressed concern. In 2023, after quitting a decade-long stint at Google's AI branch, he said he was scared by the rate of development and joined the growing throng of voices calling for more proactive AI regulation. After receiving the Nobel prize, Hinton said AI will be "like the Industrial Revolution but instead of our physical capabilities, it's going to exceed our intellectual capabilities." He also said he still worries that the consequences of his work might be "systems that are more intelligent than us that might eventually take control."
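To give a flavour of what "learning from examples of data" looks like in practice, here is a small, illustrative restricted Boltzmann machine in Python. Note that the training shortcut used here (one step of contrastive divergence) is a much later simplification associated with Hinton's work in the 2000s, not the original 1980s Boltzmann machine procedure, and the toy data, layer sizes and learning rate are all arbitrary choices.

```python
import numpy as np

# Minimal restricted Boltzmann machine (RBM): binary visible units v and
# hidden units h, connected by weights W but with no connections inside a
# layer. Trained with one step of contrastive divergence (CD-1).

rng = np.random.default_rng(42)

# Toy "examples of data": two repeating binary patterns.
data = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]] * 50)

n_visible, n_hidden = 4, 2
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible biases
b_h = np.zeros(n_hidden)           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

learning_rate = 0.1
for epoch in range(200):
    for v0 in data:
        # Positive phase: hidden activations driven by the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)

        # Negative phase: reconstruct the visibles, then the hiddens again.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)

        # CD-1 update: move weights toward what the data "wants"
        # and away from what the model currently reconstructs.
        W += learning_rate * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += learning_rate * (v0 - v1)
        b_h += learning_rate * (p_h0 - p_h1)

# After training, reconstructions of the training patterns should be close
# to the originals (reconstruction probabilities shown rather than samples).
for v in np.array([[1, 1, 0, 0], [0, 0, 1, 1]]):
    p_h = sigmoid(v @ W + b_h)
    print(v, "->", np.round(sigmoid(p_h @ W.T + b_v), 2))
```

The point of the sketch is conceptual: the network ends up reproducing the statistics of its examples without ever being explicitly programmed with them, which is the breakthrough the article describes.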
[4]
Nobel Prize in Physics Awarded for Breakthroughs in Machine Learning
The 2024 Nobel Prize in Physics was given to John Hopfield and Geoffrey Hinton for development of techniques that laid the foundation for revolutionary advances in artificial intelligence. The human brain, with its billions of interconnected neurons giving rise to consciousness, is generally considered the most powerful and flexible computer in the known universe. Yet for decades scientists have been seeking to change that via machine learning approaches that emulate the brain's adaptive computational prowess. The 2024 Nobel Prize in Physics was awarded on Tuesday to the U.S. scientist John Hopfield and the U.K. scientist Geoffrey Hinton, each of whom used the tools of physics to develop artificial neural networks that laid the foundations for many of today's most powerful artificial intelligence applications. Artificial neural networks are inspired by the brain's anatomical structure, and represent neurons by nodes that possess different values. These nodes form networks of connections, akin to the brain's natural neural synapses, which can be made stronger or weaker through training on any arbitrary data set. This adaptive response allows the artificial neural network to better recognize patterns within data and make subsequent predictions for the future -- that is, to learn without being explicitly programmed. John Hopfield, a professor at Princeton University in New Jersey, devised an associative memory -- the so-called Hopfield network -- that stores and recreates images and other patterns in data. Geoffrey Hinton, a professor at the University of Toronto, used Hopfield's method in tandem with a different approach, the Boltzmann machine, which excels at discerning patterns in data. Hinton's method can be used, for example, to classify images or create new instances of an observed pattern; the technique has helped fuel the ongoing explosion of progress in artificial intelligence that is transforming myriad sectors of human endeavor. "This year's physics laureates' breakthroughs stand on the foundations of physical science," the Nobel committee said on X (formerly Twitter). "They have showed a completely new way for us to use computers to aid and to guide us to tackle many of the challenges our society face."
[5]
Nobel Prize in physics spotlights key breakthroughs in AI revolution - making machines that learn
University of Michigan provides funding as a founding partner of The Conversation US. If your jaw dropped as you watched the latest AI-generated video, your bank balance was saved from criminals by a fraud detection system, or your day was made a little easier because you were able to dictate a text message on the run, you have many scientists, mathematicians and engineers to thank. But two names stand out for foundational contributions to the deep learning technology that makes those experiences possible: Princeton University physicist John Hopfield and University of Toronto computer scientist Geoffrey Hinton. The two researchers were awarded the Nobel Prize in physics on Oct. 8, 2024, for their pioneering work in the field of artificial neural networks. Though artificial neural networks are modeled on biological neural networks, both researchers' work drew on statistical physics, hence the prize in physics. How a neuron computes Artificial neural networks owe their origins to studies of biological neurons in living brains. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts proposed a simple model of how a neuron works. In the McCulloch-Pitts model, a neuron is connected to its neighboring neurons and can receive signals from them. It can then combine those signals to send signals to other neurons. But there is a twist: It can weigh signals coming from different neighbors differently. Imagine that you are trying to decide whether to buy a new bestselling phone. You talk to your friends and ask them for their recommendations. A simple strategy is to collect all friend recommendations and decide to go along with whatever the majority says. For example, you ask three friends, Alice, Bob and Charlie, and they say yay, yay and nay, respectively. This leads you to a decision to buy the phone because you have two yays and one nay. However, you might trust some friends more because they have in-depth knowledge of technical gadgets. So you might decide to give more weight to their recommendations. For example, if Charlie is very knowledgeable, you might count his nay three times and now your decision is to not buy the phone - two yays and three nays. If you're unfortunate to have a friend whom you completely distrust in technical gadget matters, you might even assign them a negative weight. So their yay counts as a nay and their nay counts as a yay. Once you've made your own decision about whether the new phone is a good choice, other friends can ask you for your recommendation. Similarly, in artificial and biological neural networks, neurons can aggregate signals from their neighbors and send a signal to other neurons. This capability leads to a key distinction: Is there a cycle in the network? For example, if I ask Alice, Bob and Charlie today, and tomorrow Alice asks me for my recommendation, then there is a cycle: from Alice to me, and from me back to Alice. If the connections between neurons do not have a cycle, then computer scientists call it a feedforward neural network. The neurons in a feedforward network can be arranged in layers. The first layer consists of the inputs. The second layer receives its signals from the first layer and so on. The last layer represents the outputs of the network. However, if there is a cycle in the network, computer scientists call it a recurrent neural network, and the arrangements of neurons can be more complicated than in feedforward neural networks. 
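The friend-poll analogy maps directly onto how a single artificial neuron computes. Here is a minimal sketch in Python using the weights from the example above; the threshold of zero and the +1/-1 encoding are illustrative conventions, not the only possible ones.

```python
# A single McCulloch-Pitts-style neuron as a weighted vote: each input is a
# friend's recommendation (+1 for "yay", -1 for "nay"), each weight is how
# much that friend's opinion counts, and the neuron "fires" (buy the phone)
# only if the weighted sum comes out positive.

def neuron(inputs, weights, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights))
    return +1 if total > threshold else -1

recommendations = [+1, +1, -1]            # Alice: yay, Bob: yay, Charlie: nay

# Equal trust: a simple majority vote -> buy the phone (+1).
print(neuron(recommendations, [1, 1, 1]))

# Charlie is the gadget expert, so his nay counts three times -> don't buy (-1).
print(neuron(recommendations, [1, 1, 3]))

# A completely distrusted friend gets a negative weight, so their yay
# effectively counts as a nay (-1).
print(neuron([+1, +1, +1], [1, 1, -3]))
```

Stack many of these weighted votes in layers and you have the feedforward networks described above; learning then becomes the problem of choosing good weights.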
Hopfield network The initial inspiration for artificial neural networks came from biology, but soon other fields started to shape their development. These included logic, mathematics and physics. The physicist John Hopfield used ideas from physics to study a particular type of recurrent neural network, now called the Hopfield network. In particular, he studied their dynamics: What happens to the network over time? Such dynamics are also important when information spreads through social networks. Everyone's aware of memes going viral and echo chambers forming in online social networks. These are all collective phenomena that ultimately arise from simple information exchanges between people in the network. Hopfield was a pioneer in using models from physics, especially those developed to study magnetism, to understand the dynamics of recurrent neural networks. He also showed that their dynamics can give such neural networks a form of memory. Boltzmann machines and backpropagation During the 1980s, Geoffrey Hinton, computational neurobiologist Terrence Sejnowski and others extended Hopfield's ideas to create a new class of models called Boltzmann machines, named for the 19th-century physicist Ludwig Boltzmann. As the name implies, the design of these models is rooted in the statistical physics pioneered by Boltzmann. Unlike Hopfield networks that could store patterns and correct errors in patterns - like a spellchecker does - Boltzmann machines could generate new patterns, thereby planting the seeds of the modern generative AI revolution. Hinton was also part of another breakthrough that happened in the 1980s: backpropagation. If you want artificial neural networks to do interesting tasks, you have to somehow choose the right weights for the connections between artificial neurons. Backpropagation is a key algorithm that makes it possible to select weights based on the performance of the network on a training dataset. However, it remained challenging to train artificial neural networks with many layers. In the 2000s, Hinton and his co-workers cleverly used Boltzmann machines to train multilayer networks by first pretraining the network layer by layer and then using another fine-tuning algorithm on top of the pretrained network to further adjust the weights. Multilayered networks were rechristened deep networks, and the deep learning revolution had begun. AI pays it back to physics The Nobel Prize in physics shows how ideas from physics contributed to the rise of deep learning. Now deep learning has begun to pay its due back to physics by enabling accurate and fast simulations of systems ranging from molecules and materials all the way to the entire Earth's climate. By awarding the Nobel Prize in physics to Hopfield and Hinton, the prize committee has signaled its hope in humanity's potential to use these advances to promote human well-being and to build a sustainable world.
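As a concrete, toy-scale illustration of "selecting weights based on the performance of the network on a training dataset", the sketch below trains a one-hidden-layer network on the classic XOR problem with backpropagation. Everything here (layer width, learning rate, iteration count, the XOR task itself) is an arbitrary choice for demonstration rather than anything taken from Hinton's papers.

```python
import numpy as np

# Backpropagation in miniature: a one-hidden-layer network learns XOR by
# nudging its weights downhill on the squared error of its predictions.

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)           # XOR targets

W1 = rng.standard_normal((2, 8))                          # input -> hidden
b1 = np.zeros((1, 8))
W2 = rng.standard_normal((8, 1))                          # hidden -> output
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(20000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates: adjust weights to reduce the error.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should end up close to [[0], [1], [1], [0]]
```

The same backward-error bookkeeping, scaled up to millions of weights and run on specialised hardware, is what trains today's deep networks.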
[8]
Physics Nobel scooped by machine-learning pioneers
Two researchers who developed machine-learning techniques that underpin today's boom in artificial intelligence have won the 2024 Nobel Prize in Physics. John Hopfield from Princeton University in New Jersey, and Geoffrey Hinton at the University of Toronto, Canada, share the 11 million Swedish kronor (US$1 million) prize, announced by the Royal Swedish Academy of Sciences in Stockholm on 8 October. Both used tools from physics to come up with methods that power artificial neural networks, which use brain-inspired, layered structures to learn abstract concepts. Their discoveries "form the building blocks of machine learning, that can aid humans in making faster and more reliable decisions", said Ellen Moons, chair of the Nobel committee, during the announcement. "Artificial neural networks have been used to advance research across physics topics as diverse as particle physics, material science and astrophysics." In 1982, Hopfield, a theoretical biologist with a background in physics, came up with a network that described connections between nodes as physical forces. By storing patterns as a low-energy state of the network, the system could recreate the image when prompted with a similar pattern. It became known as associative memory, because of its similarity to the brain trying to remember a rarely used word or concept. Hinton, a computer scientist, later used principles from statistical physics, which is used to collectively describe systems made up of too many parts to track individually, to further develop the 'Hopfield network'. By building probabilities into a layered version of the network, he created a tool that could recognise and classify images, or generate new examples of the type it was trained on. These processes differed from earlier computation in that the networks were able to learn from examples, including from unstructured data, which would have been challenging for conventional software based on step-by-step calculations. Neural networks today underpin tools ranging from large language models (LLMs) to systems for processing and analysing large swathes of data across the sciences. Speaking by telephone at the announcement, Hinton said that learning he had won the Nobel was "a bolt from the blue". "I'm flabbergasted, I had no idea this would happen," he said. He added that advances in machine learning "will have a huge influence, it will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability".
[9]
Physics Nobel prize goes to artificial neural networks and ...
The 2024 Nobel prize in physics has been jointly awarded to John Hopfield and Geoffrey Hinton 'for foundational discoveries and inventions that enable machine learning with artificial neural networks'. Hopfield, from Princeton University, US, and Hinton, from the University of Toronto, Canada, were praised by the Nobel committee for 'using fundamental concepts and methods from physics' to develop technologies with 'the greatest benefit to humankind', said the chair of the Nobel committee, Ellen Moons. Since their creation in the 1980s, artificial neural networks (ANNs) have evolved into a cornerstone of modern technology, powering everything from smartphone applications to cutting-edge scientific research. Inspired by neurons in the brain, ANNs operate through networks of artificial 'neurons' or nodes connected by 'synapses' that can be trained to complete specific tasks instead of simply following preset instructions. In the 1970s, Hopfield, a pioneer in biological physics, was investigating electron transfers between biological molecules. He wanted to develop computational tools to probe more complicated biological systems, which led him to consider the dynamics of simple neural networks that would be able to identify features and phenomena resulting from the interaction of isolated components. In 1982, Hopfield published a study on a simple neural network with associative memory that could store and reconstruct patterns in data. Hinton, who specialised in experimental psychology and artificial intelligence, built on this work by employing concepts from statistical physics, the science of systems built from many similar entities. This resulted in the invention of a new network that learns by example by analysing large datasets, which allows it to recognise patterns within the data. This 'training' process helps the network predict events that are highly likely to happen, such as autonomously identifying specific elements in images, thereby advancing the development of large deep learning ANNs now in use. Hinton, known as 'the godfather of artificial intelligence', said on the phone to the Nobel prize press conference that he was 'flabbergasted' to receive the prize. When asked by the audience about the significance of the technology his research helped usher in, he said, 'it will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us.' Today, AI is a transformative force, from performing complex data analysis and applications in materials science, to facial recognition and medical diagnostics. Among its most sophisticated applications is AlphaFold, a tool for predicting protein structures that exemplifies the capabilities of modern deep learning ANNs. However, despite the revolutionary potential of ANNs, 'we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control', cautions Hinton.
[10]
How a subfield of physics led to breakthroughs in AI - and from there to this year's Nobel Prize
University of Michigan provides funding as a founding partner of The Conversation US. John J. Hopfield and Geoffrey E. Hinton received the Nobel Prize in physics on Oct. 8, 2024, for their research on machine learning algorithms and neural networks that help computers learn. Their work has been fundamental in developing neural network theories that underpin generative artificial intelligence. A neural network is a computational model consisting of layers of interconnected neurons. Like the neurons in your brain, these neurons process and send along a piece of information. Each neural layer receives a piece of data, processes it and passes the result to the next layer. By the end of the sequence, the network has processed and refined the data into something more useful. While it might seem surprising that Hopfield and Hinton received the physics prize for their contributions to neural networks, used in computer science, their work is deeply rooted in the principles of physics, particularly a subfield called statistical mechanics. As a computational materials scientist, I was excited to see this area of research recognized with the prize. Hopfield and Hinton's work has allowed my colleagues and me to study a process called generative learning for materials sciences, a method that is behind many popular technologies like ChatGPT. What is statistical mechanics? Statistical mechanics is a branch of physics that uses statistical methods to explain the behavior of systems made up of a large number of particles. Instead of focusing on individual particles, researchers using statistical mechanics look at the collective behavior of many particles. Seeing how they all act together helps researchers understand the system's large-scale macroscopic properties like temperature, pressure and magnetization. For example, physicist Ernst Ising developed a statistical mechanics model for magnetism in the 1920s. Ising imagined magnetism as the collective behavior of atomic spins interacting with their neighbors. In Ising's model, there are higher and lower energy states for the system, and the material is more likely to exist in the lowest energy state. One key idea in statistical mechanics is the Boltzmann distribution, which quantifies how likely a given state is. This distribution describes the probability of a system being in a particular state - like solid, liquid or gas - based on its energy and temperature. Ising exactly predicted the phase transition of a magnet using the Boltzmann distribution. He figured out the temperature at which the material changed from being magnetic to nonmagnetic. Phase changes happen at predictable temperatures. Ice melts to water at a specific temperature because the Boltzmann distribution predicts that when it gets warm, the water molecules are more likely to take on a disordered - or liquid - state. In materials, atoms arrange themselves into specific crystal structures that use the lowest amount of energy. When it's cold, water molecules freeze into ice crystals with low energy states. Similarly, in biology, proteins fold into low energy shapes, which allow them to function as specific antibodies - like a lock and key - targeting a virus. Neural networks and statistical mechanics Fundamentally, all neural networks work on a similar principle - to minimize energy. Neural networks use this principle to solve computing problems. For example, imagine an image made up of pixels where you only can see a part of the picture. Some pixels are visible, while the rest are hidden. 
To determine what the image is, you consider all possible ways the hidden pixels could fit together with the visible pieces. From there, you would choose from among what statistical mechanics would say are the most likely states out of all the possible options. Hopfield and Hinton developed a theory for neural networks based on the idea of statistical mechanics. Just like Ising before them, who modeled the collective interaction of atomic spins, Hopfield and Hinton imagined collective interactions of pixels to solve this image problem with a neural network. They represented these pixels as neurons. Just as in statistical physics, the energy of an image refers to how likely a particular configuration of pixels is. A Hopfield network would solve this problem by finding the lowest energy arrangements of hidden pixels. However, unlike in statistical mechanics - where the energy is determined by known atomic interactions - neural networks learn these energies from data. Hinton popularized the development of a technique called backpropagation. This technique helps the model figure out the interaction energies between these neurons, and this algorithm underpins much of modern AI learning. The Boltzmann machine Building upon Hopfield's work, Hinton imagined another neural network, called the Boltzmann machine. It consists of visible neurons, which we can observe, and hidden neurons, which help the network learn complex patterns. In a Boltzmann machine, you can determine the probability that the picture looks a certain way. To figure out this probability, you can sum up all the possible states the hidden pixels could be in. This gives you the total probability of the visible pixels being in a specific arrangement. My group has worked on implementing Boltzmann machines in quantum computers for generative learning. In generative learning, the network learns to generate new data samples that resemble the data the researchers fed the network to train it. For example, it might generate new images of handwritten numbers after being trained on similar images. The network can generate these by sampling from the learned probability distribution. Generative learning underpins modern AI - it's what allows the generation of AI art, videos and text. Hopfield and Hinton have significantly influenced AI research by leveraging tools from statistical physics. Their work draws parallels between how nature determines the physical states of a material and how neural networks predict the likelihood of solutions to complex computer science problems.
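The two ideas in this piece, the Boltzmann distribution and "summing up all the possible states the hidden pixels could be in", can be made concrete with a toy calculation. The sketch below builds a four-unit Boltzmann machine with made-up couplings (all numbers are arbitrary, purely for illustration) and computes the marginal probability of each visible configuration by brute-force summation over hidden states.

```python
import itertools
import numpy as np

# A toy Boltzmann machine with 2 visible and 2 hidden binary units.
# Each full configuration (v, h) gets a Boltzmann weight exp(-E/T); the
# probability of a visible pattern is found by summing those weights over
# every possible hidden state and normalising.

T = 1.0
W_vv = np.array([[0.0, 0.5],
                 [0.5, 0.0]])      # visible-visible couplings
W_vh = np.array([[1.0, -0.5],
                 [-0.5, 1.0]])     # visible-hidden couplings
W_hh = np.array([[0.0, 0.3],
                 [0.3, 0.0]])      # hidden-hidden couplings

def energy(v, h):
    v, h = np.asarray(v, float), np.asarray(h, float)
    return -(0.5 * v @ W_vv @ v + v @ W_vh @ h + 0.5 * h @ W_hh @ h)

states = list(itertools.product([0, 1], repeat=2))

# Unnormalised Boltzmann weight of every full (visible, hidden) configuration.
weight = {(v, h): np.exp(-energy(v, h) / T) for v in states for h in states}
Z = sum(weight.values())           # partition function (normalising constant)

# Marginal probability of each visible configuration: sum over hidden states.
for v in states:
    p_v = sum(weight[(v, h)] for h in states) / Z
    print(v, round(p_v, 3))
```

Real Boltzmann machines have far too many units for this brute-force sum, which is why training them relies on sampling, but the principle is exactly the one described above.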
[11]
AI pioneers Geoffrey Hinton and John Hopfield win Nobel Prize for Physics
For the first time, a Nobel prize has been awarded to pioneers in the field of artificial intelligence. Geoffrey Hinton and John Hopfield were on Tuesday granted the honor of becoming Nobel Laureates in Physics, for their work on artificial neural networks over the last four decades. They will share the award of 11 million Swedish kronor ($1.06 million) that comes with the prize. Hinton, 76, is by far the better-known of the two. Sometimes referred to as one of the "godfathers of AI" -- along with Yoshua Bengio and Yann LeCun -- Hinton dramatically quit Google last year, publicly warning of the near- and long-term risks of the technology he had helped to create. He said he regretted his life's work, as AI is too easily misused; he believes it could increase inequality and perhaps even end up subjugating humanity. Now he's won the ultimate accolade for that work -- one that is even more prestigious than the Turing Award that Hinton, Bengio and LeCun shared in 2019. Neither Hopfield nor Hinton were the first people to develop artificial neural networks or suggest that they might be a way to develop artificial intelligence. But Hopfield, 91, helped lay the foundations for today's AI with a 1982 paper describing a brain-inspired network that could store and recall patterns, with the ability to find the closest match to even partial inputs. A few years later, Hinton and two other researchers (David Ackley and Terry Sejnowski) used the Hopfield network as the basis for their invention of the so-called Boltzmann machine -- another network model architecture that can classify images and generate new examples of the kind of material it was trained on, though the Boltzmann machine proved to be nowhere near as scalable as today's machine-learning systems. Hinton is perhaps better known today, however, for his work, along with David Rumelhart and Ronald Williams, on backpropagation, a way of applying gradient descent that allows large, multi-layer neural networks to learn efficiently. There is no Nobel prize category that clearly maps to the burgeoning AI sector -- or indeed to computer science at all. Some have previously suggested that an AI scientist could win a Nobel through the application of their technology in a more established field such as chemistry or physics. But, in this case, the Royal Swedish Academy of Sciences chose to play up both the physics-related origins and applications of Hinton and Hopfield's work. "The laureates' work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties," said Ellen Moons, the chair of the Nobel physics committee. It is certainly true that Hopfield and Hinton's work drew on the field of statistical physics, along with others such as neurobiology and cognitive psychology. Hopfield was awarded the Boltzmann Medal for statistical physics just two years ago. Nonetheless, some in the physics community are irked to see their Nobel prize go to machine-learning pioneers who were not working explicitly on fundamental physics research. "Don't want to minimize their achievements, but the link to physics is [tenuous] at best," reads the most upvoted comment on the development in the physics subreddit. "There is already too much proper physics that still has to be rewarded over this."
[12]
'It will be comparable with the industrial revolution': Two legendary AI scientists win Nobel Prize in physics for work on neural networks
The Nobel Committee for Physics announces John Hopfield and Geoffrey Hinton as the winners of the 2024 Nobel Prize for Physics. The 2024 Nobel Prize in physics has been awarded to two scientists who laid the foundations for today's rapid advancements in artificial intelligence (AI). John Hopfield and Geoffrey Hinton will share the 11 million Swedish kronor ($1.03 million) prize for their work on artificial neural networks and the algorithms that enable machines to learn, the Royal Swedish Academy of Sciences, which selects the Nobel laureates in physics, announced Tuesday (Oct. 8). "I'm flabbergasted, I had no idea this would happen, I'm very surprised," Hinton said by phone at a news conference. He was speaking from a hotel in California with poor internet and a bad phone connection. "I was going to get an MRI scan today, but I think I'll have to cancel that." Hopfield, a professor in life science at Princeton University, was recognized for creating an associative memory network -- which he first proposed as the Hopfield network in 1982 -- that can save and reconstruct images and other patterns from imperfect data. Hinton, a computer scientist at the University of Toronto, used Hopfield's network in the 1980s as the foundation for a method known as the "Boltzmann machine." Using tools from statistical physics, Hinton produced neural networks that can spot patterns in data, enabling them to classify images or create new examples of the patterns they were trained on. Taken together, the two advances were fundamental to the development of machine learning, which has since produced an explosion in new AI technologies and applications. "The laureates' work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties," Ellen Moons, the chair of the Nobel Committee for Physics, said in a statement. Commenting on the implications of his technology at the news conference, Hinton said that machine learning will "have a huge influence, it will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability." The researchers' work represented a shift in AI research away from symbolic logic -- which attempted to replicate features of human intelligence using symbols embedded inside logic systems -- to deep learning networks. The latter use layers of artificial neurons and vast quantities of data to loosely emulate processes in the human brain.
Deep learning has been around since the 1980s, but enormous energy, data, and computational requirements kept the technology in a nascent stage until 10 years ago, when computing advances sped it up. "We have no experience of what it's like to have things smarter than us. It's going to be wonderful in many respects," he added, citing benefits to healthcare and improvements to productivity. "But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control."
[13]
Hopfield and Hinton win Nobel Prize in Physics for AI breakthroughs
Their work enables advancements in artificial neural networks today. The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Physics to John J. Hopfield, Princeton University, USA, and Geoffrey E. Hinton, University of Toronto, Canada. Both laureates are recognised for their pioneering work in machine learning, specifically using artificial neural networks. Their research, drawing on principles of physics, forms the foundation of modern machine learning systems. Hopfield developed an associative memory system capable of storing and reconstructing data patterns, while Hinton introduced methods that allow networks to autonomously discover data properties and perform tasks such as image recognition. Artificial neural networks are computational systems modelled on the brain's neurons. These neurons, represented as nodes, influence each other through connections similar to synapses, adjusting their strength based on training. This year's laureates have been instrumental in shaping the use of these networks in machine learning since the 1980s. Their contributions laid the groundwork for today's advanced AI technologies. John J. Hopfield's significant contribution was his invention of a network capable of saving and reconstructing patterns. By applying principles from physics, particularly the behaviour of atomic spins, his network is designed to function by minimising energy, much like systems in nature. The network updates its nodes to progressively reveal a stored image when presented with an incomplete or distorted one. Geoffrey E. Hinton expanded upon Hopfield's work by developing the Boltzmann machine, a neural network that can identify features in data. Using statistical physics, Hinton's invention enables the network to learn by analysing common examples, allowing it to recognise and generate patterns. His research has been crucial to the rapid advancement of machine learning. The prize of 11 million Swedish kronor will be equally shared between the laureates.
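For readers who want the "minimising energy" idea in symbols, a standard textbook way to write the Hopfield network's energy and its node-update rule, with binary neuron states s_i = +1 or -1 and symmetric connection weights w_ij, is:

```latex
E(s) = -\frac{1}{2} \sum_{i \neq j} w_{ij}\, s_i s_j,
\qquad
s_i \leftarrow \operatorname{sign}\!\Big( \sum_{j} w_{ij}\, s_j \Big).
```

Each update can only keep the energy the same or lower it, so repeated updates drive the network toward a stored pattern sitting at a local energy minimum, which is the "progressively reveal a stored image" behaviour described above.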
[14]
How a subfield of physics led to breakthroughs in AI, and from there to this year's Nobel Prize
John J. Hopfield and Geoffrey E. Hinton received the Nobel Prize in physics on Oct. 8, 2024, for their research on machine learning algorithms and neural networks that help computers learn. Their work has been fundamental in developing neural network theories that underpin generative artificial intelligence. A neural network is a computational model consisting of layers of interconnected neurons. Like the neurons in your brain, these neurons process and send along a piece of information. Each neural layer receives a piece of data, processes it and passes the result to the next layer. By the end of the sequence, the network has processed and refined the data into something more useful. While it might seem surprising that Hopfield and Hinton received the physics prize for their contributions to neural networks, used in computer science, their work is deeply rooted in the principles of physics, particularly a subfield called statistical mechanics. As a computational materials scientist, I was excited to see this area of research recognized with the prize. Hopfield and Hinton's work has allowed my colleagues and me to study a process called generative learning for materials sciences, a method that is behind many popular technologies like ChatGPT. What is statistical mechanics? Statistical mechanics is a branch of physics that uses statistical methods to explain the behavior of systems made up of a large number of particles. Instead of focusing on individual particles, researchers using statistical mechanics look at the collective behavior of many particles. Seeing how they all act together helps researchers understand the system's large-scale macroscopic properties like temperature, pressure and magnetization. For example, physicist Ernst Ising developed a statistical mechanics model for magnetism in the 1920s. Ising imagined magnetism as the collective behavior of atomic spins interacting with their neighbors. In Ising's model, there are higher and lower energy states for the system, and the material is more likely to exist in the lowest energy state. One key idea in statistical mechanics is the Boltzmann distribution, which quantifies how likely a given state is. This distribution describes the probability of a system being in a particular state -- like solid, liquid or gas -- based on its energy and temperature. Ising exactly predicted the phase transition of a magnet using the Boltzmann distribution. He figured out the temperature at which the material changed from being magnetic to nonmagnetic. Phase changes happen at predictable temperatures. Ice melts to water at a specific temperature because the Boltzmann distribution predicts that when it gets warm, the water molecules are more likely to take on a disordered -- or liquid -- state. In materials, atoms arrange themselves into specific crystal structures that use the lowest amount of energy. When it's cold, water molecules freeze into ice crystals with low energy states. Similarly, in biology, proteins fold into low energy shapes, which allow them to function as specific antibodies -- like a lock and key -- targeting a virus. Neural networks and statistical mechanics Fundamentally, all neural networks work on a similar principle -- to minimize energy. Neural networks use this principle to solve computing problems. For example, imagine an image made up of pixels where you only can see a part of the picture. Some pixels are visible, while the rest are hidden. 
Neural networks and statistical mechanics

Fundamentally, all neural networks work on a similar principle -- to minimize energy -- and they use this principle to solve computing problems. For example, imagine an image made up of pixels where you can see only part of the picture. Some pixels are visible, while the rest are hidden. To determine what the image is, you consider all the possible ways the hidden pixels could fit together with the visible pieces. From there, you would choose from among what statistical mechanics would say are the most likely states out of all the possible options. Hopfield and Hinton developed a theory for neural networks based on the ideas of statistical mechanics. Just as Ising modeled the collective interaction of atomic spins, Hopfield and Hinton imagined collective interactions of pixels to solve the photo problem with a neural network. They represented these pixels as neurons. Just as in statistical physics, the energy of an image refers to how likely a particular configuration of pixels is. A Hopfield network would solve this problem by finding the lowest energy arrangements of the hidden pixels. However, unlike in statistical mechanics -- where the energy is determined by known atomic interactions -- neural networks learn these energies from data. Hinton also popularized a technique called backpropagation, which helps the model figure out the interaction energies between these neurons; this algorithm underpins much of modern AI learning.

The Boltzmann machine

Building upon Hopfield's work, Hinton imagined another neural network, called the Boltzmann machine. It consists of visible neurons, which we can observe, and hidden neurons, which help the network learn complex patterns. In a Boltzmann machine, you can determine the probability that the picture looks a certain way. To figure out this probability, you sum over all the possible states the hidden pixels could be in. This gives you the total probability of the visible pixels being in a specific arrangement. My group has worked on implementing Boltzmann machines in quantum computers for generative learning. In generative learning, the network learns to generate new data samples that resemble the data the researchers fed it during training. For example, it might generate new images of handwritten numbers after being trained on similar images. The network generates these by sampling from the learned probability distribution. Generative learning underpins modern AI -- it's what allows the generation of AI art, videos and text. Hopfield and Hinton have significantly influenced AI research by leveraging tools from statistical physics. Their work draws parallels between how nature determines the physical states of a material and how neural networks predict the likelihood of solutions to complex computer science problems.
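To illustrate the energy-minimization idea described above, here is a minimal Hopfield-network sketch in Python. It is an illustrative toy, not the laureates' original formulation: one eight-unit binary pattern stands in for an image, the weights come from a simple Hebbian rule, and the network recovers the stored pattern from a corrupted copy by flipping units so that the energy never increases.

```python
import numpy as np

rng = np.random.default_rng(0)

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the stored "memory" (+1/-1 units)
n = pattern.size

# Hebbian weights: w_ij = x_i * x_j, with no self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def energy(state):
    """Hopfield energy: E = -1/2 * sum_ij w_ij * s_i * s_j."""
    return -0.5 * state @ W @ state

# Corrupt two units of the stored pattern, as if part of the image were damaged.
probe = pattern.copy()
probe[[1, 4]] *= -1

state = probe.copy()
print("starting energy:", energy(state))
for sweep in range(3):                              # a few asynchronous update sweeps
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1   # each flip can only lower the energy
    print(f"sweep {sweep}: energy = {energy(state):.1f}")

print("recovered the stored pattern:", bool(np.array_equal(state, pattern)))
```

With more stored patterns and more units, the same update rule retrieves whichever stored pattern lies closest to the probe, which is the associative-memory behavior the article describes.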
[15]
Geoff Hinton and John Hopfield win Nobel Prize in Physics for their work in foundational AI
The Royal Swedish Academy of Sciences has announced the Nobel Prize in Physics 2024. Geoff Hinton and John Hopfield are jointly sharing the prestigious award for their work on artificial neural networks starting back in the late 1970s and early 1980s. More specifically, Hinton and Hopfield were given the award for "foundational discoveries and inventions that enable machine learning with artificial neural networks." The news comes as AI has emerged as one of the major driving forces behind what some have dubbed the fourth industrial revolution. Major innovators in the space are being recognized for their work. Earlier this year, Google DeepMind co-founder and CEO Demis Hassabis was awarded a knighthood in the U.K. for "services to artificial intelligence." Hinton is among the world's most renowned researchers in the field of AI, having laid the groundwork for many of the advances we've seen these past few years. He has often been referred to as the "godfather of deep learning." After gaining a PhD in artificial intelligence in 1978, Hinton went on to co-create the backpropagation algorithm, a method that allows neural networks to learn from their mistakes, transforming how AI models are trained. Hinton joined Google in 2013 after the search giant acquired his company DNNresearch. He quit Google last year, citing his concerns over the role that AI was playing in the spread of misinformation. Today, Hinton is a professor at the University of Toronto. Hopfield, a professor at Princeton, was also an early pioneer in some of the foundational work in the realm of AI. He developed what came to be known as the Hopfield network, a type of neural network that transformed AI by demonstrating how neural networks could store and retrieve patterns. It basically mimics how human memory works and showed how some of the principles of biology and physics could be applied to computational systems. Nobel Prize winners, also known as "laureates," receive several rewards in recognition of their work, including a gold medal, a diploma, and a cash prize of 11 million Swedish kronor ($1 million), which is split between the winners if there is more than one. And obviously, the winners gain global prestige. "The laureates' work has already been of the greatest benefit," Ellen Moons, chair of the Nobel Committee for Physics, said in a statement. "In physics, we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties."
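The backpropagation idea mentioned above -- letting a network learn from its mistakes -- can be sketched in a few lines. This is a minimal, hypothetical example of the chain-rule bookkeeping involved, using a tiny two-layer network with one weight per layer and made-up training data; it is not Hinton's original formulation.

```python
import math

# Tiny network: input x -> hidden h = tanh(w1 * x) -> output y = w2 * h.
# The loss is squared error against a target; backpropagation applies the
# chain rule to push the error signal backward and nudge each weight downhill.
w1, w2 = 0.5, -0.3                               # made-up starting weights
data = [(1.0, 0.5), (2.0, 0.8), (-1.0, -0.5)]    # hypothetical (input, target) pairs
lr = 0.1                                         # learning rate

for epoch in range(200):
    total_loss = 0.0
    for x, target in data:
        h = math.tanh(w1 * x)                    # forward pass
        y = w2 * h
        error = y - target
        total_loss += 0.5 * error ** 2

        grad_w2 = error * h                      # dL/dw2
        grad_h = error * w2                      # dL/dh
        grad_w1 = grad_h * (1 - h ** 2) * x      # dL/dw1, chain rule through tanh

        w2 -= lr * grad_w2                       # backward pass: reduce the error
        w1 -= lr * grad_w1
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  loss = {total_loss:.4f}")
```

Modern frameworks automate exactly this gradient computation across millions of weights, but the recipe is the same: run the data forward, measure the mistake, propagate it backward, adjust.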
[16]
Nobel Prize in Physics: Who are John J Hopfield and Geoffrey E Hinton?
John J. Hopfield and Geoffrey E. Hinton have been awarded the 2024 Nobel Prize for Physics for their pioneering work in neural networks and machine learning. Their discoveries laid the groundwork for today's advanced AI technologies. Hopfield is recognized for associative memory networks, while Hinton's work on Boltzmann machines has revolutionized data pattern recognition. The 2024 Nobel Prize in Physics has been awarded to John J. Hopfield and Geoffrey E. Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks." They trained artificial neural networks using physics. This year's two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today's powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures, an official statement said. Born in 1933 in Chicago, John J. Hopfield earned his PhD from Cornell University in the US. Most widely known for his 1982 study of associative neural networks, Hopfield co-founded the Computation and Neural Systems PhD program at Caltech in 1986. He spent two years in the theory group at Bell Laboratories, and subsequently was a faculty member at the University of California, Berkeley (physics), Princeton University (physics) and the California Institute of Technology (chemistry and biology). John Hopfield invented a network that uses a method for saving and recreating patterns. The Hopfield network utilises physics that describes a material's characteristics due to its atomic spin - a property that makes each atom a tiny magnet. Geoffrey E. Hinton was born in 1947 in London, UK, and completed his PhD at the University of Edinburgh in the UK. Most noted for his work on artificial neural networks, he has also earned the title of "Godfather of AI". In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto. Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning. The Royal Swedish Academy of Sciences awards the Nobel Prizes in Physics and Chemistry. Last year, the Nobel Prize in Physics was awarded to Pierre Agostini, Ferenc Krausz, and Anne L'Huillier "for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter." The prizes, which will be announced from October 7 to 14, come with an award of 11 million Swedish crowns. The Nobel Prize was established by Swedish inventor Alfred Nobel, who designated the institutions responsible for the awards in his will.
[17]
AI pioneers John Hopfield and Geoffrey Hinton won Nobel Physics Prize 2024
John Hopfield and Geoffrey Hinton have been awarded the 2024 Nobel Prize in Physics for their pioneering work that laid the foundation for today's AI revolution. The prestigious recognition shines a spotlight on the profound scientific contributions of these two visionaries, whose work has transformed both our understanding of artificial intelligence and its impact on everyday life. John Hopfield, a U.S. scientist and professor at Princeton University, is celebrated for his creation of the associative memory model in 1982. This neural network model mimics the way human brains store and reconstruct information, a concept that is now fundamental to how AI systems process data and images. His work bridged the fields of physics, neuroscience, and computation, making it possible for machines to "learn" by storing patterns and recalling them in ways that resemble human memory. Meanwhile, Geoffrey Hinton, a British-Canadian computer scientist, is widely regarded as the "godfather of AI" for his breakthroughs in deep learning. Hinton's work on neural networks enabled computers to recognize patterns and learn from vast amounts of data, an innovation that is now core to applications ranging from speech recognition to image processing. Hinton's creation of the backpropagation algorithm revolutionized how AI systems learn, and his contributions are deeply embedded in the technologies that power platforms like Google's search algorithms, autonomous vehicles, and even healthcare diagnostics. This Nobel Prize comes at a pivotal time in Hinton's career. In 2023, he made headlines by leaving his role at Google to more freely discuss the potential dangers posed by AI. He has expressed deep concern that artificial intelligence could eventually surpass human intelligence, with unforeseen consequences. "We have no experience of what it's like to have things smarter than us," Hinton remarked during the Nobel press conference. While Hinton acknowledges the transformative potential of AI in fields like healthcare, he also warns of the need to prepare for its risks -- especially the possibility of AI systems spiraling out of human control. Hopfield shares a similar sense of caution, reflecting on the dual nature of technological advancement. "One is accustomed to having technologies which are not singularly only good or only bad," he commented, hinting at the balance society must strike between innovation and responsibility. Both laureates have dedicated their careers to answering one of humanity's most profound questions: how can machines exhibit intelligence, and what are the implications of building systems that can potentially outthink their creators? Their contributions have not only laid the groundwork for machine learning but have also spurred wider discussions about ethics, responsibility, and the future role of AI in society. The Nobel Committee recognized their achievements as a revolution in science and engineering, noting that their innovations are "changing daily life" for people around the globe. The 11 million Swedish crowns ($1.1 million) prize will be shared by Hopfield and Hinton, acknowledging both the technical breakthroughs and the visionary impact of their work. This award also makes history for a prize that has traditionally honored physicists for discoveries in atomic theory, quantum mechanics, and cosmology.
Awarding the prize to pioneers in AI reflects the increasing intersection of physics, computation, and neuroscience -- fields once considered distinct but now deeply intertwined thanks to the efforts of figures like Hopfield and Hinton. As the future of AI unfolds, the work of these two laureates will continue to shape the direction of the field. Whether it's unlocking new scientific discoveries, creating smarter healthcare systems, or raising important ethical questions, their legacy will influence how we think about intelligence -- both human and machine -- for generations to come. With the Nobel Prize in hand, Hopfield and Hinton have not only secured their place in history but have also ignited further dialogue about the promises and perils of the AI-driven world they helped create. The question remains: Can humanity guide this powerful technology to ensure it benefits society, or will it become a force that we no longer control? Only time -- and continued innovation -- will tell.
[18]
Godfather of AI Geoffrey Hinton and John J Hopfield Win Nobel Prize in Physics
"I was going to have an MRI scan today but I'll have to cancel that!", says Nobel laureate Geoffrey Hinton Geoffrey E Hinton and John J Hopfield were awarded the Nobel Physics Prize 2024 in Physics by The Royal Swedish Academy of Sciences for pioneering advancements that form the basis of today's ML world. Their work used principles of physics to develop neural networks. Hinton, often referred to as the 'Godfather of AI,' achieved the award on the basis of developing the 'Boltzmann machine', a neural network model inspired by statistical physics. This machine allows neural networks to self-learn patterns from data by modelling systems with interacting nodes, mimicking how the brain processes and categorises information. This technique is foundational for technologies like image and speech recognition. Hopfield grabbed the Nobel by approaching associative memory and pattern recognition, which involved creating the 'Hopfield network,' that applies physics concepts to create a system where the energy configuration is optimised to recreate stored images or data. This has influenced numerous applications in AI. The network is described similarly to the energy of a spin system in physics, and it is trained by adjusting the connections between nodes to minimise the energy of stored images. When given a distorted or incomplete image, the Hopfield network systematically updates the node values to reduce the network's overall energy. Through this step-by-step process, it retrieves the stored image that most closely resembles the imperfect input. "The laureates' work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties," said Ellen Moons, chair of the Nobel Committee for Physics. Hinton's most cited paper, 'Image Net Classification with Deep Convolutional Neural Networks' discusses the use of Convolutional Neural Networks (CNNs) for image classification, specifically on the ImageNet dataset which contains over 15 million labelled high-resolution images. The authors trained one of the largest CNNs to date on subsets of the ImageNet dataset used in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Hinton has previously introduced 'Forward Forward' (FF) algorithm as an alternative to traditional backpropagation for training neural networks. Unlike backpropagation, which adjusts weights in a backward pass, the FF algorithm uses two forward passes to manage weights, one increasing goodness for correct data and one reducing it for incorrect data. This mimics the human brain's processing and allows for lighter, faster training with less computational power. Hinton believes this could lead to more efficient, brain-like AI systems.
[19]
In stunning Nobel win, AI researchers Hopfield and Hinton take 2024 Physics Prize
On Tuesday, the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto for their foundational work in machine learning with artificial neural networks. Hinton notably captured headlines in 2023 for warning about the threat that AI superintelligence may pose to humanity. The win came as a surprise to many, including Hinton himself. "I'm flabbergasted. I had no idea this would happen. I'm very surprised," said Hinton in a telephone call with members of the Royal Swedish Academy of Sciences during a live announcement press conference streamed to YouTube that took place this morning. Hopfield and Hinton's research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology. The win is perhaps made more eye-catching because Hinton, who is often called one of the "godfathers of AI," resigned from Google in May 2023 so he could "speak freely" about potential risks from AI systems. At the time, Hinton said that the tech industry's drive to develop AI products could result in dangerous consequences, such as a threat to humanity. "Look at how it was five years ago and how it is now," Hinton told The New York Times last year. "Take the difference and propagate it forwards. That's scary." Since then, Hinton has continued warning about the potential dangers of AI systems that may become more intelligent than humans.
[20]
Why the Nobel Prize in Physics for AI is a Game-Changer
Perhaps the most important consequence of awarding the Nobel Prize in Physics to AI pioneers is that it validates AI as a science. Designing artificial neural networks was not an innovation based purely on technology; it rested on basic principles drawn from physics. Through this award, the Nobel Committee is saluting the role physics plays in understanding and modeling complex AI systems. Hopfield's work, which showed how neural networks can memorize and reproduce stored information, has been revolutionary. Statistical mechanics provided a foundation from which AI could emulate the brain's processing and storage abilities. In much the same way, Hinton's backpropagation algorithm, which changed the training of deep neural networks, depends on mathematical optimization techniques with deep connections to physics. By recognizing AI in this category, the Nobel Committee elevates it to the status of a scientific breakthrough rather than merely an instrument of technology, acknowledging that AI is not built on computer science alone but has its very principles interwoven deeply with physics.
[21]
AI, Machine Learning Discoveries Awarded With Physics Nobel
(Bloomberg) -- Two scientists were awarded the Nobel Prize in physics for training artificial neural networks and laying the foundations for today's machine learning applications. John J. Hopfield and Geoffrey E. Hinton will share the 11 million-krona ($1.1 million) award, the Royal Swedish Academy of Sciences in Stockholm said in a statement Tuesday. Their work began in the 1980s, setting the stage for the current boom in artificial intelligence that was enabled by an explosion of computing power and massive troves of training data. Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data, the Royal Swedish Academy of Sciences said. Hinton's contribution was inventing a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures. "I'm flabbergasted, I had no idea this would happen. I am very surprised," Hinton, 76, told journalists gathered in Stockholm by phone. Hinton, who was born in London, is affiliated with the University of Toronto in Canada, while Chicago-born Hopfield, 91, is associated with Princeton University. Among the most famous physics laureates are Albert Einstein, honored in 1921 for his services to theoretical physics, and Marie Curie, who shared the 1903 prize with her husband Pierre for their research on radiation. Annual prizes for achievements in physics, chemistry, medicine, literature and peace were established in the will of Alfred Nobel, the Swedish inventor of dynamite, who died in 1896. A prize in economic sciences was added by Sweden's central bank in 1968. The laureates are announced through Oct. 14 in Stockholm, with the exception of the peace prize, whose recipients are selected by the Norwegian Nobel Committee in Oslo.
[22]
AI pioneers John Hopfield and Geoffrey Hinton win Nobel Prize in physics - SiliconANGLE
AI pioneers John Hopfield and Geoffrey Hinton win Nobel Prize in physics The Royal Swedish Academy of Sciences today awarded the Nobel Prize in physics to artificial intelligence pioneers John Hopfield and Geoffrey Hinton. Hopfield is a professor emeritus of molecular biology at Princeton University. Hinton, in turn, is a professor emeritus of computer science at the University of Toronto. They received the Nobel Prize for their "foundational discoveries and inventions that enable machine learning with artificial neural networks." The committee that issues the prize selected Hopfield for his development of an early AI model called the Hopfield network. The algorithm, which can fix distorted images, is based on concepts borrowed from the field of condensed matter physics. This is a branch of physics that focuses on the study of matter, particularly solids and liquids. Hopfield introduced the Hopfield network in a 1982 paper. Three years later, Hinton used the discovery to develop the Boltzmann machine, a groundbreaking deep learning model. The algorithm is based on not only the Hopfield network but also methods from the field of statistical physics, which uses statistical techniques to study particles. "The laureates' work has already been of the greatest benefit," said Ellen Moons, the chair of the Nobel Committee for Physics. "In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties." Following their Nobel Prize-winning discoveries in the 1980s, Hopfield and Hinton both went on to make significant discoveries in a number of other areas. In the AI ecosystem, Hinton is perhaps best known for his work on backpropagation. This is a method of training neural networks that is widely used in AI projects to this day. For his work on backpropagation and Boltzmann machines, Geoffrey Hinton won the 2018 Turing Award, the highest distinction in computer science. Hinton joined Google LLC in 2013 to support the search giant's machine learning research. He left last year, citing concerns about AI's potential risks. Besides holding a professorship at the University of Toronto, Hinton is also the chief scientific advisor of nonprofit AI lab Vector Institute. Hopfield's scientific contributions, in turn, span several different fields. He earned his physics doctorate in 1958 for a discovery related to quasiparticles, groups of particles that behave like a single particle. Hopfield's subsequent work helped advance not only physics and AI research but also other fields such as biochemistry.
[23]
Princeton physicist wins physics Nobel Prize for pioneering AI research
John J Hopfield and Geoffrey E Hinton are awarded the Nobel Prize in Physics at the Royal Swedish Academy of Sciences in Stockholm on Tuesday. (Jonathan Nackstrand / AFP - Getty Images)

An American professor at Princeton University and a British-Canadian professor at the University of Toronto won the Nobel Prize in Physics on Tuesday for research that "formed the building blocks" of a key part of artificial intelligence. John J. Hopfield, 91, was awarded the honor alongside Geoffrey E. Hinton, 76, who left his job at Google last year so he could speak freely about his concerns over the technology. Since the 1980s, the pair have been using tools from physics to develop the foundations of what is known as "machine learning," one of the core concepts of AI widely used today. Their research "formed the building blocks of machine learning, that can aid humans in making faster and more reliable decisions," Ellen Moons, chair of the Nobel Committee for Physics, told a news conference. The use of this technology has "become part of our daily lives, for instance in facial recognition and language translation," Moons said, while warning that AI's "rapid development has also raised concerns about our future." Machine learning involves feeding computers masses of data so they can "learn" how to do all sorts of things -- from diagnosing diseases to knowing what people's favorite streaming shows are. Hopfield has been a key influential figure in this field since 1982, when he invented the "Hopfield network" -- a method widely used since. Hinton used this basis to come up with "the Boltzmann machine," which can be used for tasks such as classifying images.
[24]
Neural networks, machine learning? Nobel-winning AI science explained
Paris (AFP) - The Nobel Prize in Physics was awarded to two scientists on Tuesday for discoveries that laid the groundwork for the artificial intelligence used by hugely popular tools such as ChatGPT. British-Canadian Geoffrey Hinton, known as a "godfather of AI," and US physicist John Hopfield were given the prize for "discoveries and inventions that enable machine learning with artificial neural networks," the Nobel jury said. But what are those, and what does this all mean? Here are some answers.

What are neural networks and machine learning?

Mark van der Wilk, an expert in machine learning at the University of Oxford, told AFP that an artificial neural network is a mathematical construct "loosely inspired" by the human brain. Our brains have a network of cells called neurons, which respond to outside stimuli -- such as things our eyes have seen or ears have heard -- by sending signals to each other. When we learn things, some connections between neurons get stronger, while others get weaker. Unlike traditional computing, which works more like reading a recipe, artificial neural networks roughly mimic this process. The biological neurons are replaced with simple calculations sometimes called "nodes" -- and the incoming stimuli they learn from are replaced by training data. The idea is that this could allow the network to learn over time -- hence the term machine learning.

What did Hopfield discover?

But before machines would be able to learn, another human trait was necessary: memory. Ever struggle to remember a word? Consider the goose. You might cycle through similar words -- goon, good, ghoul -- before striking upon goose. "If you are given a pattern that's not exactly the thing that you need to remember, you need to fill in the blanks," van der Wilk said. "That's how you remember a particular memory." This was the idea behind the "Hopfield network" -- also called "associative memory" -- which the physicist developed back in the early 1980s. Hopfield's contribution meant that when an artificial neural network is given something that is slightly wrong, it can cycle through previously stored patterns to find the closest match. In 1985, Hinton revealed his own contribution to the field -- or at least one of them -- called the Boltzmann machine. Named after 19th century physicist Ludwig Boltzmann, the concept introduced an element of randomness. This randomness was ultimately why today's AI-powered image generators can produce endless variations to the same prompt. Hinton also showed that the more layers a network has, "the more complex its behaviour can be". This in turn made it easier to "efficiently learn a desired behaviour," French machine learning researcher Francis Bach told AFP.

What is it used for?

Despite these ideas being in place, many scientists lost interest in the field in the 1990s. Machine learning required enormously powerful computers capable of handling vast amounts of information. It takes millions of images of dogs for these algorithms to be able to tell a dog from a cat. So it was not until the 2010s that a wave of breakthroughs "revolutionised everything related to image processing and natural language processing," Bach said. From reading medical scans to directing self-driving cars, forecasting the weather to creating deepfakes, the uses of AI are now too numerous to count.

But is it really physics?

Hinton had already won the Turing award, which is considered the Nobel for computer science.
But several experts said his was a well-deserved Nobel win in the field of physics, which started science down the road that would lead to AI. French researcher Damien Querlioz pointed out that these algorithms were originally "inspired by physics, by transposing the concept of energy onto the field of computing". Van der Wilk said the first Nobel "for the methodological development of AI" acknowledged the contribution of the physics community, as well as the winners. "There is no magic happening here," van der Wilk emphasised. "Ultimately, everything in AI is multiplications and additions."
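Van der Wilk's point that everything in AI comes down to multiplications and additions can be seen in a few lines. Below is a minimal, hypothetical sketch of one "node" of the kind described above: it multiplies each input by a weight, adds the results, and applies a simple threshold. The weights are hand-picked for illustration; in a trained network they would be learned from data.

```python
def node(inputs, weights, bias):
    """One artificial neuron: multiply, add, then apply a simple threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

# Three input signals and hand-picked (hypothetical) weights.
print(node([0.2, 0.9, 0.5], weights=[1.0, -0.5, 2.0], bias=-0.6))   # prints 1.0
```

Stacking many such nodes into layers and letting training data set the weights is, in essence, what the networks described in these articles do at a much larger scale.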
[25]
AI innovators win Nobel Prize for physics
John Hopfield and Geoffrey Hinton have won the physics Nobel Prize for formative work on artificial intelligence that has helped drive scientific advances but raised fears about the risks of abuse. The award highlights the fundamental role the AI field of machine learning now plays in research, because of the amount of data it can process at speed. Hinton, who quit Google last year so he could speak more freely, said he was "flabbergasted" by Tuesday's honour and spoke of the power and perils of AI. "It's going to be wonderful in many respects," Hinton told the award ceremony in Stockholm by telephone, citing AI-driven advances in healthcare and industrial productivity. "But we also have to worry about a number of possible bad consequences -- particularly the threat of these things getting out of control." Hopfield and Hinton won the SKr11mn ($1.06mn) prize for "foundational discoveries and inventions" in machine learning dating back to the 1980s, the Royal Swedish Academy of Sciences said. Their work helped develop so-called artificial neural networks that mimic the biological wiring of the human brain to process information. Hinton, who is revered in technology circles as one of the "godfathers" of AI, said its impact would be historic in magnitude. "It will have a huge influence . . . comparable with the industrial revolution," he said. "But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us." Hopfield, a US physicist, devised an artificial neural network to save and recreate patterns. Hinton, a British-Canadian computer scientist, used Hopfield's research to build a new network known as the Boltzmann machine. This can be used to classify images or create new examples of pattern types it has learned. Together they helped "initiate the current explosive development of machine learning", the Nobel organisers said. Machine learning has become part of our daily lives in areas including facial recognition, language translation and medical diagnosis, said Ellen Moons, chair of the Nobel committee for physics. But the technology's scope and ever-increasing capabilities have stoked fears ranging from its use by authoritarian states to the possibility of machines one day evolving to act independently of human instructions. Moons warned that AI needed to be used carefully. "While machine learning has enormous benefits, its rapid development has also raised concerns about our future," Moons said. "Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefits of humankind."
[26]
AI Pioneers Hopfield, Hinton Win 2024 Nobel Prize for Neural Networks | PYMNTS.com
The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Physics to two pioneers whose work laid the foundation for today's artificial intelligence (AI) revolution, even as one of the recipients has become a vocal critic of the technology's potential dangers. The Academy announced Tuesday (Oct. 8) that John J. Hopfield, 91, of Princeton University, and Geoffrey E. Hinton, 76, of the University of Toronto, share the prestigious award "for foundational discoveries and inventions that enable machine learning with artificial neural networks." The two laureates' work, which dates back to the 1980s, provided crucial building blocks for modern machine learning techniques. Their innovations in training artificial neural networks -- computing systems inspired by the human brain -- have become fundamental to today's AI industry. "The laureates' work has already been of the greatest benefit," Ellen Moons, chair of the Nobel Committee for Physics, said in the announcement. "In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties." Hopfield pioneered a type of neural network that can store and reconstruct patterns in data, such as images. His approach drew on concepts from physics, particularly the behavior of atomic spins in materials. Hinton built on Hopfield's work to develop the Boltzmann machine, a more sophisticated neural network capable of autonomously discovering important features in data. This innovation proved critical for tasks like image classification and generating new examples of learned patterns. In recent years, Hinton has become an outspoken voice cautioning against the potential risks posed by advanced AI systems. In May 2023, Hinton resigned from his position at Google to speak more freely about these concerns. Since then, he has advocated for more AI regulations. "I suspect that Andrew Ng and Yann LeCun have missed the main reason why the big companies want regulations," Hinton wrote last year on X. "Years ago the founder of a self-driving company told me that he liked safety regulations because if you satisfied them it reduced your legal liability for accidents." The prize, which amounts to 11 million Swedish kronor (approximately $1 million), will be shared equally between the two laureates. This year's award highlights the interdisciplinary nature of scientific breakthroughs. "When we talk about artificial intelligence, we often mean machine learning using artificial neural networks," the committee noted, underscoring the broad impact of the laureates' work. The Nobel Prize in Physics is the second of this year's Nobel Prizes to be announced. The Medicine Prize was awarded on Monday, with the Chemistry Prize to follow on Wednesday. The Literature and Peace prizes will round out the week on Thursday and Friday, respectively.
[27]
What Geoffrey Hinton's Nobel Prize Means for the AI World
The Godfather of AI, Geoffrey Hinton, has won the 2024 Nobel Prize in Physics. Obviously, the tech world is abuzz with discussions about the AI professor known for his work in deep learning, and his work in physics. However, moving on from the massive win for Hinton and the AI/ML ecosystem, we see this also as a new beginning for the future of AI and its investments. Hinton received the award for developing the 'Boltzmann machine', a neural network model inspired by statistical physics. This model allows neural networks to self-learn patterns from data by modelling systems with interacting nodes, mimicking how the brain processes and categorises information. Apart from this, David Baker, an American biochemist and computational biologist, and Demis Hassabis and John M Jumper, two Google DeepMind scientists, have been awarded the Nobel Prize in Chemistry 2024 by The Royal Swedish Academy of Sciences. Baker received one half of the award for 'computational protein design', while Hassabis and Jumper jointly shared the other half for 'protein structure prediction'. The latter two have successfully utilised AI to predict the structure of almost all known proteins. In 2020, Hassabis and Jumper presented an AI model called AlphaFold2. For the healthcare industry, these awards are revolutionary in showcasing the impact that AI can have on the medical field. When someone asked ChatGPT if Hinton would ever be able to land a Nobel Prize in Physics, it replied that it is not possible at all since physics is not the same as AI. But it has been proven completely wrong. Traditionally, the Nobel Prize in Physics has been reserved for discoveries that illuminate the physical world. Hinton's win, however, highlights how deeply AI is now intertwined with our understanding of the universe and how neural networks -- rooted in concepts borrowed from statistical mechanics -- are driving forward both scientific inquiry and technological progress. "AI is truly mainstream," commented Debarghya Deedy Das, acknowledging that the person who wrote seminal computer science papers for ML has won the Nobel Prize for Physics. Meanwhile, Hinton was "flabbergasted" about being awarded the prize. "I'm in a cheap hotel in California which doesn't have a good internet or phone connection. I was going to have an MRI scan today but I'll have to cancel that," Hinton told the press conference. Vishnu Vardhan, the founder of SML and creator of Hanooman AI, told AIM that the Physics Nobel for an AI scientist marks a coming of age for AI. "Today, AI is no longer an unknown black box but a mainstream science which is proven at the fundamental level. This is huge as AI will now be used with a lot more confidence as it is recognised as fundamental science than just another computer algorithm." Vardhan added that this will pave the way for India to use AI to solve many of its unique challenges from agricultural production and access to education and universal healthcare. "This will also bring in a lot more research funding and collaboration at the global level. A great day for the AI community to be recognised as mainstream science," he added. Subbarao Kambhampati sounded worried about the plight of the "desi CS folk". Now, with this Nobel Prize, they will have to deal with Nobel expectations as well. "Daughter, you too can get one, can't you? Work a little harder..." he posted in Hindi on X. "This Nobel prize is a beacon of inspiration for everyone in the field of AI," Ankush Sabharwal, CEO of CoRover, told AIM.
"Their efforts have opened the door for AI systems that help people make judgments more quickly and accurately. To ensure that these technologies are really human-centric, it is more important than ever to develop AI that understands human emotions and behaviour," he added. Hinton has been very vocal about the dangers of AI. He compares the potential risks of AI to the creation of the atomic bomb during World War II, emphasising the dangers of profit-driven AI development that could result in AI-generated content surpassing human-produced content and jeopardising our survival. After winning the award, Hinton also said that he is proud that Ilya Sutskever, one of Hinton's students, fired Sam Altman from OpenAI as Altman is not concerned about AI safety at all. Because of this, some people have even claimed that Hinton got the award mostly because he has been siding with the people who promote 'AI doomerism'. On the other hand, some even call it the promotion of ethical AI, as Hinton proposes a slow and responsible approach to building AI. "A large language model has a trillion weights. You have 100 trillion weights. Even if you use 10% of that, you have 10 trillion weights," said Hinton. He adds that an LLM in its trillion weights knows thousands of times more than we do. "It's got much more knowledge and that's partly because it has seen much more data," he explained in his lecture at Oxford University, saying that it might also be because it has much better learning algorithms, something humans will never be able to achieve. "You have got crazily more parameters than you have got experiences. Our brain is optimised for not having many experiences."
[28]
Neural networks, machine learning? Nobel-winning AI science explained
The Nobel Prize in Physics was awarded to two scientists on Tuesday for discoveries that laid the groundwork for the artificial intelligence used by hugely popular tools such as ChatGPT. British-Canadian Geoffrey Hinton, known as a "godfather of AI," and US physicist John Hopfield were given the prize for "discoveries and inventions that enable machine learning with artificial neural networks," the Nobel jury said. But what are those, and what does this all mean? Here are some answers.

What are neural networks and machine learning?

Mark van der Wilk, an expert in machine learning at the University of Oxford, told AFP that an artificial neural network is a mathematical construct "loosely inspired" by the human brain. Our brains have a network of cells called neurons, which respond to outside stimuli -- such as things our eyes have seen or ears have heard -- by sending signals to each other. When we learn things, some connections between neurons get stronger, while others get weaker. Unlike traditional computing, which works more like reading a recipe, artificial neural networks roughly mimic this process. The biological neurons are replaced with simple calculations sometimes called "nodes" -- and the incoming stimuli they learn from are replaced by training data. The idea is that this could allow the network to learn over time -- hence the term machine learning.

What did Hopfield discover?

But before machines would be able to learn, another human trait was necessary: memory. Ever struggle to remember a word? Consider the goose. You might cycle through similar words -- goon, good, ghoul -- before striking upon goose. "If you are given a pattern that's not exactly the thing that you need to remember, you need to fill in the blanks," van der Wilk said. "That's how you remember a particular memory." This was the idea behind the "Hopfield network" -- also called "associative memory" -- which the physicist developed back in the early 1980s. Hopfield's contribution meant that when an artificial neural network is given something that is slightly wrong, it can cycle through previously stored patterns to find the closest match. In 1985, Hinton revealed his own contribution to the field -- or at least one of them -- called the Boltzmann machine. Named after 19th century physicist Ludwig Boltzmann, the concept introduced an element of randomness. This randomness was ultimately why today's AI-powered image generators can produce endless variations to the same prompt. Hinton also showed that the more layers a network has, "the more complex its behavior can be". This in turn made it easier to "efficiently learn a desired behavior," French machine learning researcher Francis Bach told AFP.

What is it used for?

Despite these ideas being in place, many scientists lost interest in the field in the 1990s. Machine learning required enormously powerful computers capable of handling vast amounts of information. It takes millions of images of dogs for these algorithms to be able to tell a dog from a cat. So it was not until the 2010s that a wave of breakthroughs "revolutionized everything related to image processing and natural language processing," Bach said. From reading medical scans to directing self-driving cars, forecasting the weather to creating deepfakes, the uses of AI are now too numerous to count.

But is it really physics?

Hinton had already won the Turing award, which is considered the Nobel for computer science.
But several experts said his was a well-deserved Nobel win in the field of physics, which started science down the road that would lead to AI. French researcher Damien Querlioz pointed out that these algorithms were originally "inspired by physics, by transposing the concept of energy onto the field of computing". Van der Wilk said the first Nobel "for the methodological development of AI" acknowledged the contribution of the physics community, as well as the winners. And while ChatGPT can sometimes make AI seem genuinely creative, it is important to remember the "machine" part of machine learning. "There is no magic happening here," van der Wilk emphasized. "Ultimately, everything in AI is multiplications and additions."
[29]
AI pioneer Geoffrey Hinton, who warned of X-risk, wins Nobel Prize in Physics
Geoffrey E. Hinton, a leading artificial intelligence researcher and professor emeritus at the University of Toronto, has been awarded the 2024 Nobel Prize in Physics alongside John J. Hopfield of Princeton University. The Royal Swedish Academy of Sciences has awarded both men the prize of 11 million Swedish kronor (approximately $1.06 million USD), to be shared equally between the laureates. Hinton has been nicknamed by various outlets and fellow researchers as the "Godfather of AI" due to his revolutionary work in artificial neural networks, a foundational technology underpinning modern artificial intelligence. Despite the recognition, Hinton has grown increasingly cautious about the future of AI. In 2023, he left his role at Google's DeepMind unit to speak more freely about the potential dangers posed by uncontrolled AI development. Hinton has warned that rapid advancements in AI could lead to unintended and harmful consequences, including misinformation, job displacement, and even existential threats -- including human extinction, or so-called "x-risk." He has expressed concern that the very technology he helped create may eventually surpass human intelligence in unpredictable ways, a scenario he finds particularly troubling. As MIT Tech Review reported after interviewing him in May 2023, Hinton was particularly concerned about bad actors, such as authoritarian leaders, who could use AI to manipulate elections, wage wars, or carry out immoral objectives. He expressed concern that AI systems, when tasked with achieving goals, may develop dangerous subgoals, like monopolizing energy resources or self-replication. While Hinton did not sign the high-profile letters calling for a moratorium on AI development, his departure from Google signaled a pivotal moment for the tech industry. Hinton believes that, without global regulation, AI systems could become uncontrollable, a sentiment echoed by many within the field. His vision for AI is now shaped by both its immense potential and the looming risks it carries. Even reflecting on his work today after winning the Nobel, Hinton told CNN that generative AI "... will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us... we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control."

What Hinton won the Nobel for

Geoffrey Hinton's recognition with the Nobel Prize comes as no surprise to those familiar with his extensive contributions to artificial intelligence. Born in London in 1947, Hinton initially pursued a PhD at the University of Edinburgh, where he embraced neural networks -- an idea that was largely disregarded by most researchers at the time. In 1985, he and collaborator Terry Sejnowski created the "Boltzmann machine," an algorithm, named for the Austrian physicist Ludwig Boltzmann, capable of learning to identify elements in data. Joining the University of Toronto in 1987, Hinton worked with graduate students to further advance AI.
Their work became central to the development of today's machine learning systems, forming the basis for many of the applications we use today, including image recognition and natural language processing, self-driving cars, and even language models like OpenAI's GPT series. In 2012, Hinton and two of his graduate students from the University of Toronto, Ilya Sutskever and Alex Krizhevsky, founded a spinoff company called DNNresearch to focus on advancing deep neural networks -- specifically "deep learning" -- which models artificial intelligence on the human brain's neural pathways to improve machine learning capabilities. Hinton and his collaborators developed a neural network capable of recognizing images (like flowers, dogs, and cars) with unprecedented accuracy, a feat that had long seemed unattainable. Their research fundamentally changed AI's approach to computer vision, showcasing the immense potential of neural networks when trained on vast amounts of data. Despite its significant achievements, DNNresearch had no products or immediate commercial ambitions when it was founded. Instead, it was formed as a mechanism for Hinton and his students to more effectively navigate the growing interest in their work from major tech companies, which would eventually lead to the auction that sparked the modern race for AI dominance. In fact, they put the company up for auction in December 2012, setting off a competitive bidding war among Google, Microsoft, Baidu, and DeepMind, as recounted in an amazing Wired magazine article by Cade Metz from 2021. Hinton eventually chose to sell to Google for $44 million, even though he could have driven the price higher. This auction marked the beginning of an AI arms race between tech giants, driving rapid advancements in deep learning and AI technology. This background is critical to understanding Hinton's impact on AI and how his innovations contributed to his being awarded the Nobel Prize in Physics today, reflecting the foundational importance of his work in neural networks and machine learning to the evolution of modern AI. U of T President Meric Gertler congratulated Hinton on his accomplishment, highlighting the university's pride in his historic achievement. Hinton is widely credited for advancing neural networks through the development of the Boltzmann machine, a model that can classify data and generate new patterns from training examples.

Hopfield's legacy

John J. Hopfield, a professor at Princeton University who shares the Nobel Prize with Hinton, developed an associative memory model, known as the Hopfield network, which revolutionized how patterns, including images, can be stored and reconstructed. This model applies principles from physics, specifically atomic spin systems, to neural networks, enabling them to work through incomplete or distorted data to restore full patterns, and is similar to how diffusion models powering image and video AI services can learn to create new images from training on reconstructing old ones. His contributions have not only influenced AI but have also impacted computational neuroscience and error correction, showcasing the interdisciplinary relevance of his work. His work, closely related to atomic spin systems, paved the way for further advancements in AI, including Hinton's Boltzmann machine. While Hinton's work catapulted neural networks into the modern era, Hopfield's earlier breakthroughs laid a crucial foundation for pattern recognition in neural models.
Both laureates' achievements have significantly influenced the rapid growth of AI, leading to transformative changes in industries ranging from technology to healthcare. The Nobel Committee emphasized that their work in artificial neural networks has already benefited a wide range of fields, particularly in materials science and beyond.
[30]
Pioneers in Neural Networks Win 2024 Nobel Prize in Physics - Neuroscience News
Summary: Professors John J. Hopfield and Geoffrey E. Hinton have been awarded the 2024 Nobel Prize in Physics for their groundbreaking work on artificial neural networks, which laid the foundation for modern machine learning. Hopfield invented a network that recalls saved images by adjusting its "energy" based on physics principles, while Hinton expanded this model to create the Boltzmann machine, enabling it to classify and generate complex patterns. Their work has catalyzed advancements in AI, shaping tools that learn from data and recognize patterns. These contributions have profoundly influenced both neuroscience and computational science. Professor John J. Hopfield at Princeton University and Professor Geoffrey E. Hinton at the University of Toronto, Canada, were awarded the 2024 Nobel Prize in Physics for their foundational discoveries and inventions that enable machine learning with artificial neural networks. Hopfield was the recipient of the 2012 SfN Swartz Prize for Theoretical and Computational Neuroscience. John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilizes physics that describes a material's characteristics due to its atomic spin - a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network's energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with. Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognize characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning. The Society for Neuroscience honored Hopfield in 2012 with the Swartz Prize for his impact on neuroscience and his creation of a new framework for understanding how neurons interact to create learning and memory. The Swartz Prize for Theoretical and Computational Neuroscience is given to an individual whose activities have produced a significant cumulative contribution to theoretical models or computational methods in neuroscience or who has made a particularly noteworthy recent advance in theoretical or computational neuroscience. The prize is endowed by the Swartz Foundation. The Royal Swedish Academy of Sciences, founded in 1739, is an independent organization whose overall objective is to promote the sciences and strengthen their influence in society. The Academy takes special responsibility for the natural sciences and mathematics, but endeavors to promote the exchange of ideas between various disciplines. The Nobel Prize amount for 2024 is set at Swedish kronor (SEK) 11.0 million (~$1.1 million) and will be split equally between the laureates.
[31]
Pioneers in artificial intelligence win the Nobel Prize in physics
Two pioneers of artificial intelligence -- John Hopfield and Geoffrey Hinton -- won the Nobel Prize in physics Tuesday for helping create the building blocks of machine learning that is revolutionizing the way we work and live but also creates new threats to humanity, one of the winners said. Hinton, who is known as the "godfather of artificial intelligence," is a citizen of Canada and Britain who works at the University of Toronto. Hopfield is an American working at Princeton. "This year's two Nobel Laureates in physics have used tools from physics to develop methods that are the foundation of today's powerful machine learning," the Nobel committee said in a press release. Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said the two laureates "used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets." She said that such networks have been used to advance research in physics and "have also become part of our daily lives, for instance in facial recognition and language translation." Hinton predicted that AI will end up having a "huge influence" on civilization, bringing improvements in productivity and health care. "It would be comparable with the Industrial Revolution," he said in the open call with reporters and the officials from the Royal Swedish Academy of Sciences. "Instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us. And it's going to be wonderful in many respects," Hinton said. "But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control." The Nobel committee that honored the science behind machine learning and AI also mentioned fears about its possible flipside. Moons said that while it has "enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind." Hinton shares those concerns. He quit a role at Google so he could more freely speak about the dangers of the technology he helped create. On Tuesday, he said he was shocked at the honor. "I'm flabbergasted. I had no idea this would happen," he said when reached by the Nobel committee on the phone. There was no immediate reaction from Hopfield. Hinton, now 76, in the 1980s helped develop a technique known as backpropagation that has been instrumental in training machines how to "learn." His team at the University of Toronto later wowed peers by using a neural network to win the prestigious ImageNet computer vision competition in 2012. That win spawned a flurry of copycats, giving birth to the rise of modern AI. Hopfield, 91, created an associative memory that can store and reconstruct images and other types of patterns in data, the Nobel committee said. Hinton used Hopfield's network as the foundation for a new network that uses a different method, known as the Boltzmann machine, that the committee said can learn to recognize characteristic elements in a given type of data.
Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it. If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer. The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award's creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel's death. Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14.
[32]
Two artificial intelligence leaders win physics Nobel Prize
Oct. 8 (UPI) -- The Nobel Prize in physics was awarded on Tuesday to a pair of scientists hailing from the United States and Canada for their work in artificial intelligence that has become the foundation of powerful machine learning. The Nobel committee said John Hopfield, of Princeton University, created an associative memory that can store and reconstruct images and other types of patterns in data while Geoffrey Hinton, of the University of Toronto, invented a method that can autonomously find properties in data and perform tasks such as identifying specific elements in pictures. "The laureates' work has already been of the greatest benefit," Ellen Moons, chair of the Nobel Committee for Physics, said in a statement. "In physics, we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties." The Nobel Committee said the idea of machine learning using artificial neural networks was inspired by how the human brain works. In the artificial neural network, the brain's neurons are represented by nodes that have different values and influence each other through connections. "This year's laureates have conducted important work with artificial neural networks from the 1980s onward," the committee said. The committee said the world is just now coming to recognize how the work of Hopfield and Hinton in laying down some of the crucial foundations of artificial intelligence has shaped the world and will continue to do so. "With their breakthroughs, that stand on the foundations of physical science, they have shown a completely new way for us to use computers to aid and to guide us to tackle many of the challenges our society faces," the committee said. Hinton, known as the "Godfather of AI," made headlines last year when he quit Google to focus on AI threat issues and joined hundreds of tech leaders to sign a statement warning about the risk of AI without the proper guardrails.
[33]
Duo wins Physics Nobel for key breakthroughs in AI
American John Hopfield and British-Canadian Geoffrey Hinton won the Nobel Prize in Physics on Tuesday for pioneering work in the development of artificial intelligence. The pair were honored "for foundational discoveries and inventions that enable machine learning with artificial neural networks," the jury said. "These artificial neural networks have been used to advance research across physics topics as diverse as particle physics, material science and astrophysics," Ellen Moons, chair of the Nobel Committee for Physics, told a press conference. Moons also noted that these tools have also become part of our daily lives, including in facial recognition and language translation. While lauding the potential of AI, Moons noted that "its rapid development has also raised concerns about our future collectively." "Humans carry the responsibility for using this new technology in a safe and ethical way," she said. Hopfield, a 91-year-old professor at Princeton University, was spotlighted for having created "an associative memory that can store and reconstruct images and other types of patterns in data." 'Flabbergasted' The jury said Hinton, a 76-year-old professor at the University of Toronto, "invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures." "I'm flabbergasted, I had no idea this would happen," Hinton told reporters via a phone interview as the laureates were announced in Stockholm. Hinton said he was an avid user of AI tools such as ChatGPT, but also conceded that he had concerns about the potential impact of the technology he helped spawn. "In the same circumstances, I would do the same again, but I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control," the researcher added. The Nobel Prize in Physics is the second Nobel of the season after the Medicine Prize on Monday was awarded to American scientists Victor Ambros and Gary Ruvkun. The US duo were honored for their discovery of microRNA and its role in how genes are regulated. Awarded since 1901, the Nobel Prizes honor those who have, in the words of prize creator and scientist Alfred Nobel, "conferred the greatest benefit on humankind". Last year, the Nobel Prize in Physics went to France's Pierre Agostini, Hungarian-Austrian Ferenc Krausz and Franco-Swede Anne L'Huillier for research using ultra quick light flashes that enable the study of electrons inside atoms and molecules. The physics prize will be followed by the chemistry prize on Wednesday, with the highly watched literature and peace prizes to be announced on Thursday and Friday respectively. The economics prize wraps up the 2024 Nobel season on October 14. The winners will receive their prize, consisting of a diploma, a gold medal and a $1 million check, from King Carl XVI Gustaf at a formal ceremony in Stockholm on December 10, the anniversary of the 1896 death of scientist Alfred Nobel who created the prizes in his last will and testament.
[34]
Nobel Prize in physics awarded to 2 scientists for discoveries that enabled artificial intelligence
STOCKHOLM -- Two pioneers of artificial intelligence - John Hopfield and Geoffrey Hinton - won the Nobel Prize in physics Tuesday for helping create the building blocks of machine learning that is revolutionizing the way we work and live but also creates new threats to humanity, one of the winners said. Hinton, who is known as the Godfather of artificial intelligence, is a citizen of Canada and Britain who works at the University of Toronto, and Hopfield is an American working at Princeton. "This year's two Nobel Laureates in physics have used tools from physics to develop methods that are the foundation of today's powerful machine learning," the Nobel committee said in a press release. Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said the two laureates "used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets." She said that such networks have been used to advance research in physics and "have also become part of our daily lives, for instance in facial recognition and language translation." While the committee honored the science behind machine learning and AI, Moons also mentioned its flipside, saying that "while machine learning has enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind." Hinton shares those concerns. He quit a role at Google so he could more freely speak about the dangers of the technology he helped create. On Tuesday, he said he was shocked at the honor. "I'm flabbergasted. I had no idea this would happen," he said when reached by the Nobel committee on the phone. Hinton predicted that AI will end up having a "huge influence" on civilization, bringing improvements in productivity and health care. "It would be comparable with the Industrial Revolution," he said in the open call with reporters and the officials from the Royal Swedish Academy of Sciences. "Instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us. And it's going to be wonderful in many respects," Hinton said. "But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control." Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it. If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer. The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award's creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel's death. Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14.
[35]
Nobel Prize awarded to pioneers of artificial intelligence
Hopfield's 'associative memory' can store patterns and recreate them, and served as the basis on which today's AI is trained. Two scientists were awarded the 2024 Nobel Prize in Physics for their contributions towards the foundation for today's powerful artificial intelligence and machine learning models. Laureates John Hopfield and Geoffrey Hinton were honoured with the award today (8 October) for their work, starting in the 1980s, in helping create machines that can mimic functions such as memory and learning and developing the earliest models that today's generative AI is based upon. The way in which machine learning models are trained was inspired by how human brains learn, connecting nodes, or small pieces of information, with connectors, similar to how brain synapses work in humans. This hypothesis likened a human neural network to an artificial neural network and forms the basis of how computer models are trained today. Hopfield published his discovery of 'associative memory,' also called the 'Hopfield Network', in 1982, a network that can store patterns and recreate them. When trained on an image, the network can check a different image that was input and make corrections to it to match the first. The Hopfield Network often reproduced the original image on which it was trained. The network could recreate data that contained 'noise' (wrong or unnecessary data), or data which had been partially erased. Hopfield's associative memory gives perspective on today's large language models, built on networks similar to, but much larger than, Hopfield's initial discovery. Hinton, who had previously studied experimental psychology and artificial intelligence, picked up from the Hopfield Network and built the Boltzmann machine, an early example of a generative model, using statistical physics. The Boltzmann machine, which was published in 1985, utilised an equation by the 19th century physicist Ludwig Boltzmann, and can learn from being given examples. A trained Boltzmann machine can recognise familiar traits in data it has not previously seen, much like humans who can recognise traits of something familiar in an entirely new object. In a similar way, the Boltzmann machine can recognise an entirely new example if it belongs to a category found in the data it was trained on, and differentiate it from anything dissimilar. Hinton continued his work into artificial neural networks even when the industry seemingly lost interest in the 1990s. However, interest was renewed in the 2010s. Hopfield and Hinton's work fed into the AI and machine learning boom that humanity is currently in, with major improvements being made to how machines process data. Generative AI has made further developments, being able to process complex human languages and vast amounts of data. Prof Peter Gallagher, the head of astrophysics and director of Dunsink Observatory at the Dublin Institute for Advanced Studies (DIAS), said machine learning is "transforming how researchers in space science and astrophysics are analysing and interpreting complex datasets". "Machine learning allows us to automatically find and characterise large numbers of solar radio bursts that would be impossible to achieve by eye", he said. Rhodri Cusack, a neuroscience professor at Trinity College Dublin, said AI neural networks have proven to be valuable models of processes in the brain. "In short, machines are helping us understand ourselves, which in turn provides new avenues for technology.
None of this would be possible without the seminal work of Hopfield and Hinton". Earlier this year, Hinton, widely considered the 'Godfather of AI', was awarded the Ulysses Medal by University College Dublin (UCD) for his contributions to society through backpropagation - a way of training artificial neural networks to be more accurate by feeding error rates back through them, reducing the need for continued input from a human. However, he also left his role at Google last year to be more vocal about the dangers of AI. He said that with the flood of data created by generative AI, an average person will "not be able to know what is true anymore".
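The Boltzmann machine described above learns by comparing statistics gathered while its visible units are clamped to training examples with statistics gathered while the machine runs freely. A full Boltzmann machine is slow to sample, so the sketch below uses the restricted variant (the RBM, which Hinton later helped popularize) trained with one-step contrastive divergence; the 6-pixel prototypes, layer sizes and learning rate are invented for illustration and are not taken from the 1985 paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann machine: binary visible and hidden units,
    trained by contrasting data-driven statistics with statistics the
    model produces on its own (one-step contrastive divergence)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.vb = np.zeros(n_visible)   # visible biases
        self.hb = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.hb)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.vb)
        return p, (rng.random(p.shape) < p).astype(float)

    def train_step(self, v0):
        # positive phase: hidden activity driven by the training data
        ph0, h0 = self.sample_h(v0)
        # negative phase: let the machine "run on its own" for one step
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        # weight change = data correlations minus model correlations
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.vb += self.lr * (v0 - v1).mean(axis=0)
        self.hb += self.lr * (ph0 - ph1).mean(axis=0)

# toy data: noisy copies of two 6-pixel prototypes
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]], dtype=float)
data = protos[rng.integers(0, 2, size=200)]
data = np.abs(data - (rng.random(data.shape) < 0.1))   # flip 10% of pixels

rbm = RBM(n_visible=6, n_hidden=4)
for epoch in range(500):
    rbm.train_step(data)

# a corrupted example is typically pulled back toward its prototype
probe = np.array([[1, 1, 0, 0, 0, 0]], dtype=float)
_, h = rbm.sample_h(probe)
pv, _ = rbm.sample_v(h)
print(np.round(pv, 2))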
[36]
Nobel prize in physics awarded to AI pioneers John Hopfield, Geoffrey Hinton
On Tuesday, two leaders in artificial intelligence, John Hopfield and Geoffrey Hinton, were awarded the Nobel Prize in Physics for their groundbreaking contributions to machine learning. This achievement has revolutionized how we live and work while also presenting potential dangers, one of the winners highlighted. Hinton, often referred to as the "Godfather of AI," holds Canadian and British citizenship and works at the University of Toronto. John Hopfield, an American scientist, is based at Princeton University. Their work has laid the foundation for machine learning technology, which powers many of the AI tools we use today.
[37]
Two AI pioneers win the Nobel Prize for their work in machine learning
Two artificial intelligence pioneers were awarded the Nobel Prize for their work in machine learning, which laid the foundation for the current AI boom. Geoffrey Hinton, also known as the "godfather of AI," and John Hopfield were named as the 2024 winners of the Nobel Prize in Physics on Tuesday. Hinton and Hopfield, who both started their work in machine learning in the 1980s, were awarded the prize "for foundational discoveries and inventions that enable machine learning with artificial neural networks," the Royal Swedish Academy of Sciences said in a statement. Hopfield is known for inventing a network used in machine learning called the "Hopfield network," which is used for storing and reconstructing images and other patterns in data using physics, according to the Royal Swedish Academy of Sciences. Hopfield's network was then used by Hinton as the foundation for a new network that uses statistical physics, called the "Boltzmann machine," which "can learn to recognize characteristic elements in a given type of data," the Royal Swedish Academy of Sciences said. "The laureates' work has already been of the greatest benefit," Ellen Moons, Chair of the Nobel Committee for Physics, said. "In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties." In May 2023, Hinton left his job on Google's AI research team to talk openly about his concerns over the risks of AI. "I console myself with the normal excuse: If I hadn't done it, somebody else would have," Hinton told The New York Times. On Tuesday, Hinton said, in response to questions about regrets over his work, that he "would do the same again, but I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control," Bloomberg reported. Hopfield and Hinton will share the prize of 11 million Swedish kronor, or $1 million.
[38]
"Godfather of AI" Geoffrey Hinton, Princeton University scientist John Hopfield win Nobel physics prize
A Canadian scientist who once warned artificial intelligence "might take over" if not developed responsibly joined a Princeton University professor in winning the Nobel Prize in physics Tuesday. The awarders of the prize, the Nobel committee at the Royal Swedish Academy of Sciences, said the two winners - John Hopfield and Geoffrey Hinton, known as the "godfather of artificial intelligence" - helped create the building blocks of machine learning. AI is revolutionizing work but also poses new threats to humanity, Hinton said. Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said the two laureates "used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets." She said that such networks have been used to advance research in physics and "have also become part of our daily lives, for instance in facial recognition and language translation." While the committee honored the science behind machine learning and AI, Moons also mentioned its flipside, saying that "while machine learning has enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind." Hinton shares those concerns. He quit a role at Google so he could more freely speak about the dangers of the technology he helped create. On Tuesday, he said he was shocked at the honor. "I'm flabbergasted. I had no idea this would happen," he said when reached by the Nobel committee on the phone. Hinton, a citizen of Canada and Britain who works at the University of Toronto, predicted that AI will end up having a "huge influence" on civilization, bringing improvements in productivity and health care. "It would be comparable with the Industrial Revolution," he said in the open call with reporters and the officials from the Royal Swedish Academy of Sciences. "Instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us. And it's going to be wonderful in many respects," Hinton said. "But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control." Hopfield is a Swarthmore College alumnus who received his PhD from Cornell University. At Princeton, he is the Howard A. Prior Professor in the Life Sciences, emeritus, and an emeritus professor of molecular biology. He also holds positions in physics and neuroscience. Hopfield, who left Princeton for Caltech in 1980, developed the "Hopfield network," an artificial neural network that can find and store patterns in information in a way that mimics the brain. In a 1982 article, he wrote about using a network with 30 nodes similar to the brain's neurons, and a total of 500 parameters to keep track of. He tried working with a network of 100 nodes, but the computer he was using at the time couldn't handle it. Compare that to today's large language models such as ChatGPT, which can make use of billions or even a trillion parameters. "Thanks to their work from the 1980s and onward, John Hopfield and Geoffrey Hinton have helped lay the foundation for the machine learning revolution that started around 2010," the Nobel committee said.
Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it. If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer. The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award's creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel's death. Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14.
[39]
Machine learning pioneers, including the 'Godfather of AI,' are awarded the Nobel Prize in Physics
Geoffrey Hinton, one of the recipients, left Google in 2023 for ethical reasons. Two scientists have been awarded the Nobel Prize in Physics "for foundational discoveries and inventions that enable machine learning with artificial neural networks." John Hopfield, an emeritus professor of Princeton University, devised an associative memory that's able to store and reconstruct images and other types of patterns in data. Geoffrey Hinton, who has been dubbed the "Godfather of AI," pioneered a way to autonomously find properties in data, leading to the ability to identify certain elements in pictures. "This year's physics laureates' breakthroughs stand on the foundations of physical science. They have shown a completely new way for us to use computers to aid and to guide us to tackle many of the challenges our society faces," the committee said. "Thanks to their work humanity now has a new item in its toolbox, which we can choose to use for good purposes. Machine learning based on artificial neural networks is currently revolutionizing science, engineering and daily life." However, Hinton has grown concerned about machine learning and its potential impact on society. He was part of Google's deep-learning artificial intelligence team (Google Brain, which merged with DeepMind last year) for many years before leaving in 2023 so he could "freely speak out about the risks of AI." At the time, he expressed concern about generative AI spurring a tsunami of misinformation and having the potential to wipe out jobs, along with the possibility of fully autonomous weapons emerging. Although Hinton acknowledged the likelihood that machine learning and AI will improve health care, he warned that "it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," he told reporters. That said, Hinton, a professor of computer science at the University of Toronto, was "flabbergasted" to learn that he had become a Nobel Prize laureate.
[40]
Scientists who built 'foundation' for AI awarded Nobel Prize
Two scientists credited with laying the "foundation of today's powerful machine learning," University of Toronto professor emeritus Geoffrey Hinton and Princeton University professor John Hopfield, were awarded the Nobel Prize in physics today. Their discoveries and inventions laid the groundwork for many of the recent breakthroughs in artificial intelligence, the Nobel committee at the Royal Swedish Academy of Sciences said. Since the 1980s, their work has enabled the creation of artificial neural networks, computer architecture loosely modeled after the structure of the brain. By mimicking the way our brains make connections, neural networks allow AI tools to essentially "learn by example." Developers can train an artificial neural network to recognize complex patterns by feeding it data, undergirding some of the most high-profile uses of AI today, from language generation to image recognition. "I had no expectations of this. I am extremely surprised and I'm honoured to be included," a "flabbergasted" Hinton said in a University of Toronto news release. Hinton, often called "The Godfather of AI," told the New York Times last year that "a part of him ... now regrets his life's work." He reportedly left his post at Google in 2023 in order to be able to call attention to the potential risks posed by the technology he was instrumental in bringing to fruition. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said in the NYT interview. In 2013, Google acquired Hinton's neural networks company, which he started with two students including Ilya Sutskever, who would later become chief scientist at OpenAI before leaving this year. The Nobel committee recognized Hinton for developing what's called the Boltzmann machine, a generative model, with colleagues in the 1980s: Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning. Hinton's work builds on fellow awardee John Hopfield's Hopfield network, an artificial neural network that can recreate patterns: The Hopfield network utilises physics that describes a material's characteristics due to its atomic spin - a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network's energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with. Hinton continues to raise his concerns with AI, including in a call today with reporters. "We have no experience of what it's like to have things smarter than us. And it's going to be wonderful in many respects," he said. "But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control."
[41]
Here's Why These Two Scientists Won the $1.06 Million 2024 Nobel Prize in Physics
The prizewinners share a reward totaling 11 million Swedish kronor or $1.06 million U.S. dollars. Two North America-based academics used physics to help pioneer advances in AI -- and now they're being rewarded for their contributions with the 2024 Nobel Prize in Physics. The Royal Swedish Academy of Sciences announced on Tuesday that Princeton emeritus professor John Hopfield and University of Toronto professor Geoffrey E. Hinton were the latest Physics Nobel Prize winners. Hopfield was honored for inventing the Hopfield Neural Network, a system that stores and recreates patterns in data, in 1982. Hinton, who has been called "The Godfather of AI," used the Hopfield network as the basis for a network of his own, the Boltzmann machine, which he co-invented in 1985. The machine can identify properties in data and be used for tasks like finding hidden features in data. "The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained," the Nobel Prize committee stated in a post on X. "Hinton has built upon this work, helping initiate the current explosive development of machine learning." The prizewinners share a reward totaling 11 million Swedish kronor or $1.06 million U.S. dollars. Hinton, who is 76 years old, shared that he was "in a cheap hotel in California" when he heard the news. "I was going to have an MRI scan today but I'll have to cancel that!" he said at Tuesday's press conference. Hopfield, who is 91, was based in a cottage in England when he found out. "My wife and I went out to get a flu shot and stopped to get a coffee on the way back home," he told Princeton University's Office of Communications. When he arrived home, he had "a pile of emails" that he said were "heartwarming." The Nobel Prize winners in Chemistry, Literature, and Peace will be announced later this week while the prize for economic sciences will be announced next week. The Nobel Prize in Physiology or Medicine was awarded Monday to UMass Chan Medical School professor Victor Ambros and Harvard Medical School professor Gary Ruvkun for their discovery of microRNA, which is crucial for gene regulation.
[42]
AI pioneers win Nobel Prize in physics
Two pioneers of artificial intelligence -- John Hopfield and Geoffrey Hinton -- won the Nobel Prize in physics Tuesday for helping create the building blocks of machine learning that is revolutionizing the way we work and live but also creates new threats for humanity. Hinton, who is known as the godfather of artificial intelligence, is a citizen of Canada and Britain who works at the University of Toronto, and Hopfield is an American working at Princeton. "These two gentlemen were really the pioneers," said Nobel physics committee member Mark Pearce. The artificial neural networks -- interconnected computer nodes inspired by neurons in the human brain -- the researchers pioneered are used throughout science and medicine and "have also become part of our daily lives," said Ellen Moons of the Nobel committee at the Royal Swedish Academy of Sciences.
[43]
AI pioneers Geoffrey Hinton, John Hopfield win Nobel Prize
First-ever awarded for contributions to artificial intelligence If you needed another sign that we've well and truly entered the AI age, here it is: The first Nobel Prize has been awarded for contributions to artificial intelligence. AI "godfather" Dr. Geoffrey Hinton, and his intellectual predecessor in the realm of learning machines, Dr. John Hopfield, were jointly awarded the 2024 Nobel Prize for physics today "for foundational discoveries and inventions that enable machine learning with artificial neural networks," the Royal Swedish Academy of Sciences said. El Reg readers and AI watchers are likely familiar with Hinton's pioneering work on neural networks, and his high-profile departure from an advisory role at Google, driven by concerns over the potential dangers of the AI systems he helped create. Hopfield's work, on the other hand, is even more foundational to modern AI, and influenced Hinton's advancements. According to a write up [PDF] of the reasons for the award, Hopfield's greatest contribution to AI came in 1982 when he created a neural network (named after himself) capable of storing multiple patterns and retrieving them from memory by distinguishing between them. The Committee likened the "Hopfield network" to the brain's associative memory, where we search for and recall information, such as words. It described the network as a system of artificial neurons with varying connection strengths. "Hopfield described the overall state of the network with a property that is equivalent to the energy in the spin system found in physics; the energy is calculated using a formula that uses all the values of the nodes and all the strengths of the connections between them," it explained. By the time the entire network processes the data, it often reproduces the original image it was trained on, the Academy noted, but what made it truly special was its ability to store multiple pictures at the same time, and differentiate between them. From recall to interpretation While a Hopfield network and associative memory techniques can recall images as patterns in data, it can't interpret what they are. That's where Hinton came in. "When Hopfield published his article on associative memory, Geoffrey Hinton was working at Carnegie Mellon University in Pittsburgh, USA," the Academy said. "Along with his colleague, Terrence Sejnowski, Hinton started from the Hopfield network and expanded it to build something new, using ideas from statistical physics." "The states in which the individual components can jointly exist can be analyzed using statistical physics, and the probability of them occurring calculated," the Nobel awarding body said. Measure those probabilities and assign them to objects, and you have Hinton's Boltzmann machine. Another neural network, but a far more advanced one, the Boltzmann machine can learn from examples of data, recognize familiar traits across samples, and recognize new examples of an object by filtering known basics into various categories. Boltzmann machines are still used today to power recommendation engines and other basic AIs, and are frequently a part of larger machine learning networks. "The laureates' work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties," Ellen Moons, Chair of the Nobel Committee for the physics prize said. 
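The "formula that uses all the values of the nodes and all the strengths of the connections between them" mentioned above is, in its standard textbook form, the Hopfield energy function; the notation here follows the usual convention rather than the committee's write-up. For node values $s_i \in \{-1, +1\}$, connection strengths $w_{ij}$ and thresholds $\theta_i$:

$$E = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i\, s_j + \sum_i \theta_i\, s_i$$

Each update flips a single node to whichever value does not increase $E$, which is why recall proceeds "downhill" until the network settles into one of the stored patterns.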
From interpretation to worry It's the second major award Hinton's won for his contributions to AI, after sharing the 2019 Turing Prize - often called the Nobel of computing - with fellow "AI godfathers" Yoshua Bengio and Yann LeCun. Since then, however, Hinton has become downright skeptical of the learning machines he helped create. After leaving Google in May 2023, Hinton expressed regret for his role in laying the foundation for modern AI, saying that when he looked at AI's growth to date and how it was likely to impact society in the future, the possibilities were "scary." "I console myself with the normal excuse: if I hadn't done it, somebody else would have," Hinton told the New York Times' Cade Metz, a former Register journalist, last year. Hinton was joined by Bengio in signing an open letter last year calling for regulation of AI to prevent future harms. The letter likened AI to the threat of climate change, saying that while we ignored those warnings, we could head off trouble with AI before it's too late. "There is a responsible path, if we have the wisdom to take it," the letter begs. Hinton reiterated his concerns about AI in an interview shortly after he found out he won the Nobel. "I wish I had a sort of simple recipe that if you do this, everything's going to be okay. But I don't," Hinton told Nobel Prize Outreach's chief science officer Adam Smith. "We're a kind of bifurcation point in history where in the next few years we need to figure out if there's a way to deal with that threat [of AI running amok]." "One thing governments can do is force the big companies to spend a lot more of their resources on safety research," Hinton added. "Companies like OpenAI can't just put safety research on the back burner." Hinton and Hopfield will share a prize of 11 million Swedish kronor (about $1 million), and will receive their awards on December 10, the anniversary of dynamite inventor and prize namesake Alfred Nobel's death in 1896. ®
[44]
A Nobel Prize for Artificial Intelligence
The list of Nobel laureates reads like a collection of humanity's greatest treasures: Albert Einstein, Marie Curie, Francis Crick, Toni Morrison. As of this morning, it also includes two physicists whose research, in the 1980s, laid the foundations for modern artificial intelligence. Earlier today, the 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton for using "tools from physics to develop methods that are the foundation of today's powerful machine learning." Hinton is sometimes referred to as a "godfather of AI," and today's prize -- one that is intended for those whose work has conferred "the greatest benefit to humankind" -- would seem to mark the generative-AI revolution, and tech executives' grand pronouncements about the prosperity that ChatGPT and its brethren are bringing, as a fait accompli. Not so fast. Committee members announcing the prize, while gesturing to generative AI, did not mention ChatGPT. Instead, their focus was on the grounded ways in which Hopfield and Hinton's research, which enabled the statistical analysis of enormous datasets, has transformed physics, chemistry, biology, and more. As I wrote in an article today, the award "should not be taken as a prediction of a science-fictional utopia or dystopia to come so much as a recognition of all the ways that AI has already changed the world." AI models will continue to change the world, but AI's proven applications should not be confused with Big Tech's prophecies. Machines that can "learn" from large datasets are the stuff of yesterday's news, and superintelligent machines that replace humans remain the stuff of yesterday's novels. Let's not forget that. A couple weeks ago, I had the pleasure of speaking with Terence Tao, perhaps the world's greatest living mathematician, about his perceptions of today's generative AI and his vision for an entirely new, "industrial-scale" mathematics that AI could one day enable. I found our conversation fascinating, and hope you will as well.
[45]
The godfathers of Artificial Intelligence have just won the Nobel Prize in Physics - Softonic
Research on neural networks paved the way for current AI systems like ChatGPT. The Nobel Prize in Physics has been awarded to two scientists, Geoffrey Hinton and John Hopfield, for their work on machine learning. The announcement was made by the Royal Swedish Academy of Sciences at a press conference in Stockholm, Sweden. Curiously, the British-Canadian professor Geoffrey Hinton, known as the Godfather of Artificial Intelligence, was himself surprised by the award: he resigned from Google in 2023 and has since warned about the dangers of machines that could become more intelligent than humans. The American John Hopfield is a professor at Princeton University in the US, while the British-Canadian Geoffrey Hinton is a professor at the University of Toronto in Canada. Machine learning is key to artificial intelligence, as it determines how a computer can be trained to generate information. This is the engine behind a wide range of technologies we use today, from how we search the Internet to how we edit photos on our phones. The Academy listed some of the crucial applications of the work of both scientists, such as the improvement of climate modeling, the development of solar cells, and the analysis of medical images. Professor Hinton's groundbreaking research on neural networks paved the way for current AI systems like ChatGPT. In artificial intelligence, neural networks are systems similar to the human brain in their way of learning and processing information. They allow AI to learn from experience, just as a person would. This is known as deep learning. Professor Hinton said that his work on artificial neural networks was revolutionary. But he also expressed concern about the future and stated that he would do the same work again, "but I worry that the overall consequences of this could be systems more intelligent than us that end up taking control." As Nobel Prize winners, the two professors share a monetary award of 11 million Swedish kronor (equivalent to 1 million dollars).
[46]
Pioneers in AI win the Nobel Prize in physics
Two pioneers of artificial intelligence - John Hopfield and Geoffrey Hinton - won the Nobel Prize in physics Tuesday for helping create the building blocks of machine learning that is revolutionizing the way we work and live but also creates new threats for humanity. Hinton, who is known as the godfather of artificial intelligence, is a citizen of Canada and Britain who works at the University of Toronto, and Hopfield is an American working at Princeton. "These two gentlemen were really the pioneers," said Nobel physics committee member Mark Pearce. The artificial neural networks - interconnected computer nodes inspired by neurons in the human brain - the researchers pioneered are used throughout science and medicine and "have also become part of our daily lives," said Ellen Moons of the Nobel committee at the Royal Swedish Academy of Sciences. Hopfield, whose 1982 work laid the groundwork for Hinton's, told The Associated Press, "I continue to be amazed by the impact it has had." Hinton predicted that AI will end up having a "huge influence" on civilization, bringing improvements in productivity and health care. "It would be comparable with the Industrial Revolution," he said in an open call with reporters and officials of the Royal Swedish Academy of Sciences. "We have no experience of what it's like to have things smarter than us. And it's going to be wonderful in many respects," Hinton said. "But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control." Warning of AI risks The Nobel committee also mentioned fears about the possible flipside. Moons said that while it has "enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind." Hinton, who quit a role at Google so he could speak more freely about the dangers of the technology he helped create, shares those concerns. "I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control," Hinton said. For his part, Hopfield, who signed early petitions by researchers calling for strong control of the technology, compared the risks and benefits to work on viruses and nuclear energy, capable of helping and harming society. At a Princeton news conference, he made reference to the concerns, bringing up the dystopia imagined in George Orwell's "1984," or the fictional apocalypse inadvertently created by a Nobel-winning physicist in Kurt Vonnegut's "Cat's Cradle." Neither winner was home to get the call Hopfield, who was staying with his wife at a cottage in Hampshire, England, said that after grabbing coffee and getting his flu shot, he opened his computer to a flurry of activity. "I've never seen that many emails in my life," he said. A bottle of champagne and bowl of soup were waiting, he added, but he doubted there were any fellow physicists in town to join the celebration. Hinton said he was shocked at the honor. "I'm flabbergasted. I had no idea this would happen," he said when reached by the Nobel committee on the phone. He said he was at a cheap hotel with no internet.
Hinton's work considered 'the birth' of AI Hinton, 76, helped develop a technique in the 1980s known as backpropagation that has been instrumental in training machines how to "learn" by fine-tuning errors until they disappear. It's similar to the way a student learns, with an initial solution graded and flaws identified and returned to be fixed and repaired. This process continues until the answer matches the network's version of reality. Hinton had an unconventional background as a psychologist who also dabbled in carpentry and was genuinely curious about how the mind works, said protege Nick Frosst, who was Hinton's first hire at Google's AI division in Toronto. His "playfulness and genuine interest in answering fundamental questions I think is key to his success as a scientist," Frosst said. Nor did he stop at his pioneering 1980s work. "He's been consistently trying out crazy things and some of them work very well and some of them don't," Frosst said. "But they all have contributed to the success of the field and galvanized other researchers to try new things as well." Hinton's team at the University of Toronto wowed peers by using a neural network to win the prestigious ImageNet computer vision competition in 2012. That spawned a flurry of copycats and was "a very, very significant moment in hindsight and in the course of AI history," said Stanford University computer scientist and ImageNet creator Fei-Fei Li. "Many people consider that the birth of modern AI," she said. Hinton and fellow AI scientists Yoshua Bengio and Yann LeCun won computer science's top prize, the Turing Award, in 2019. "For a long time, people thought what the three of us were doing was nonsense," Hinton told the AP in 2019. "My message to young researchers is, don't be put off if everyone tells you what you are doing is silly." Many of Hinton's former students and collaborators followed him into the tech industry as it began capitalizing on AI innovations, and some started their own AI companies, including Frosst's Cohere and ChatGPT maker OpenAI. Hinton said he uses machine learning tools in his daily life. "Whenever I want to know the answer to anything, I just go and ask GPT-4," Hinton said at the Nobel announcement. "I don't totally trust it because it can hallucinate, but on almost everything it's a not-very-good expert. And that's very useful." Physics prize for pioneer AI work is significant Hopfield, 91, created an associative memory that can store and reconstruct images and other types of patterns in data, the Nobel committee said. Just as Hinton came to the field from psychology, Hopfield stressed how cutting-edge science comes from crossing the borders of scientific fields like physics, biology and chemistry instead of researchers staying in their lane. It's why this prize is a physics prize, he said, pointing out that his neural network borrows from condensed matter physics. With big complex problems in scientific fields, "if you are not motivated by physics, you just don't tackle the class of problems," Hopfield said. While there's no Nobel for computer science, Li said that awarding a traditional science prize to AI pioneers is significant and shows how boundaries between disciplines have blurred. Disagreement on AI risks Not all of their peers agree with the Nobel laureates about the risks of the technology they helped create. Frosst has had many "spirited debates" with Hinton about AI's risks and disagrees with some of Hinton's warnings but not his willingness to publicly address them.
"Mostly we disagree on timescale and on the particular technology that he's sounding the alarm on," Frosst said. "I don't think that neural nets and language models as they exist today pose an existential risk." Bengio, who has long sounded alarms about AI risks, said what really alarms him and Hinton is "loss of human control" and whether AI systems will act morally when they're smarter than humans. "We don't know the answer to these questions," he said. "And we should make sure we do before we build those machines." Asked whether the Nobel committee might have factored in Hinton's warnings when deciding on the award, Bengio dismissed that, saying "we're talking about very early work when we thought that everything would be rosy." Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize. They continue with the chemistry prize Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14. The prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award's creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel's death.
[47]
Of Course AI Just Got a Nobel Prize
When the Swedish inventor Alfred Nobel wrote his will in 1895, he designated funds to reward those who "have conferred the greatest benefit to humankind." The resulting Nobel Prizes have since been awarded to the discoverers of penicillin, X-rays, and the structure of DNA -- and, as of today, to two scientists who, decades ago, laid the foundations for modern artificial intelligence. Today, John Hopfield and Geoffrey Hinton received the Nobel Prize in Physics for groundbreaking statistical methods that have advanced physics, chemistry, biology, and more. In the announcement, Ellen Moons, the chair of the Nobel Committee for Physics and a physicist at Karlstad University, celebrated the two laureates' work, which used "fundamental concepts from statistical physics to design artificial neural networks" that can "find patterns in large data sets." She mentioned applications of their research in astrophysics and medical diagnosis, as well as in daily technologies such as facial recognition and language translation. She even alluded to the changes and challenges that AI may bring in the future. But she did not mention ChatGPT, widespread automation and the resulting global economic upheaval or prosperity, or the possibility of eliminating all disease with AI, as tech executives are wont to do. Hopfield's and Hinton's respective research did lay the groundwork for the generative-AI revolution that Google CEO Sundar Pichai has compared to the harnessing of fire. In 1982, Hopfield invented a way for computer programs to store and recall patterns, reminiscent of human memory, and three years later, Hinton devised a way for programs to detect patterns from a set of examples. Those two methods and subsequent advances enabled this century's machine-learning revolution, which is built upon machines that detect, store, and reproduce statistical patterns from huge amounts of data, such as genetic sequences, weather forecasts, and internet text. The Nobel committee focused its remarks on the foundational aspects of artificial neural networks: the ability to feed unfathomably large and complex amounts of data into an algorithm that will then, more or less undirected, detect previously unseen and consequential patterns in those data. As a result, drug discovery, neuroscience, renewable-energy research, and particle physics are fundamentally changing. Last year, a biomedical researcher at Harvard told me, "We can really make discoveries that would not be possible without the use of AI." All sorts of nonchatbot algorithms across the internet, on social-media and e-commerce and media websites, use neural networks. In a presentation about today's award, the theoretical physicist Anders Irbäck, another committee member, noted how these neural networks have been applied in astrophysics, materials science, climate modeling, and molecular biology. Following the announcement, journalists were eager to ask about generative AI and ChatGPT, and Hinton -- who has frequently voiced fears of an AI apocalypse -- likened its influence to that of the Industrial Revolution. "We have no experience of what it's like to have things smarter than us," Hinton, who called into the ceremony, said. But the two committee members giving answers, Moons and Irbäck, demurred on questions about "GPT" and danced around Hinton's doomerism. Today's award, in other words, should not feed the AI-hype cycle. 
It is a celebration of the ways in which machine-learning research "benefits all of humanity," to borrow OpenAI's phrase, in largely unseen, grounded ways that are no less important for that pragmatism. The prize should not be taken as a prediction of a science-fictional utopia or dystopia to come so much as a recognition of all the ways that AI has already changed the world.
[48]
Nobel Prize in physics awarded to 2 scientists for discoveries that enabled artificial intelligence
John Hopfield and Geoffrey Hinton, seen in picture, are awarded this year's Nobel Prize in Physics, which is announced at a press conference by Hans Ellegren, center, permanent secretary at the Swedish Academy of Sciences in Stockholm, Sweden, Tuesday. AP-Yonhap Two pioneers of artificial intelligence - John Hopfield and Geoffrey Hinton - won the Nobel Prize in physics Tuesday for helping create the building blocks of machine learning that is revolutionizing the way we work and live but also creates new threats to humanity, one of the winners said. Hinton, who is known as the Godfather of artificial intelligence, is a citizen of Canada and Britain who works at the University of Toronto, and Hopfield is an American working at Princeton. "This year's two Nobel Laureates in physics have used tools from physics to develop methods that are the foundation of today's powerful machine learning," the Nobel committee said in a press release. Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said the two laureates "used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets." She said that such networks have been used to advance research in physics and "have also become part of our daily lives, for instance in facial recognition and language translation." While the committee honored the science behind machine learning and AI, Moons also mentioned its flipside, saying that "while machine learning has enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind." Hinton shares those concerns. He quit a role at Google so he could more freely speak about the dangers of the technology he helped create. On Tuesday, he said he was shocked at the honor. "I'm flabbergasted. I had no idea this would happen," he said when reached by the Nobel committee on the phone. Hinton predicted that AI will end up having a "huge influence" on civilization, bringing improvements in productivity and health care. "It would be comparable with the Industrial Revolution," he said in the open call with reporters and the officials from the Royal Swedish Academy of Sciences. "Instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us. And it's going to be wonderful in many respects," Hinton said. "But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control." Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it. If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer. The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award's creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel's death. Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday.
The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14. (AP)
[49]
Nobel prize in physics given to researchers for pioneering work on machine learning
John Hopfield at Princeton University and Geoffrey Hinton at the University of Toronto honoured for pioneering work on artificial neural networks The Nobel prize in physics has been awarded to two researchers for their groundbreaking work on machines that learn. John Hopfield at Princeton University and Geoffrey Hinton at the University of Toronto were honoured for their pioneering work on artificial neural networks which underlie much of modern artificial intelligence. Announced by the Royal Swedish Academy of Sciences in Stockholm, the winners share 11m Swedish kronor (about £810,000). The Nobel committee said the prize was given "for foundational discoveries and inventions that enable machine learning with artificial neural networks".
[50]
'Godfather of A.I.' Geoffrey Hinton Wins the Nobel Prize in Physics
Hinton said he is "someone who doesn't really know what field he's in but would like to understand how the brain works." Geoffrey Hinton, a University of Toronto professor hailed as the "Godfather of A.I." for his contributions to the technology, wasn't expecting a call this morning (Oct. 8) declaring him a winner of the Nobel Prize in Physics. "I had no idea I'd even been nominated," said Hinton in an interview after the announcement, adding that his first thought was "how could I be sure it wasn't a spoof call." Hinton co-won the annual prize with John Hopfield of Princeton University for their decades of work in training artificial neural networks, which are inspired by the structure of the human brain and helped usher in the current A.I. revolution. The two researchers will share 11 million Swedish kronor ($1 million) in prize money. "In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties," said Ellen Moons, chair of the Nobel Committee for Physics, in a statement. Hopfield, who has also worked at institutions like Bell Laboratories, the California Institute of Technology and the University of California, Berkeley, was honored for his invention of the Hopfield network, a neural network model that helps machines store information. He, too, was surprised to receive the news that he is now a Nobel Prize laureate. After going out for a coffee and flu shot this morning, the American scientist returned home to a pile of emails that were "astounding" and "heartwarming," according to a press release from Princeton University. Building upon the Hopfield network, Hinton helped develop the Boltzmann machine to classify images and create new examples of patterns. He previously received the Turing Award, a prize often dubbed the "Nobel prize of computer science," alongside A.I. researchers Yoshua Bengio and Yann LeCun in 2019 for their work on neural networks. Originally from England, Hinton moved to the U.S. in the 1970s and eventually Canada in the 1980s. He took a role at Google in 2013 after a startup he created with two University of Toronto students was acquired by the company for $44 million. When asked whether he considers himself a computer scientist or a physicist, Hinton said he is "someone who doesn't really know what field he's in but would like to understand how the brain works." Hinton left his position at Google last year, in part to speak more freely about his growing concerns surrounding the power of A.I. Despite lauding its potential to create solutions in fields like health care, Hinton has also advocated for greater regulation to ensure companies like OpenAI set aside resources for safety research. "There's regret where you feel guilty because you did something you knew you shouldn't have done, and there's regret where you did something you would do again in the same circumstances but it may in the end not turn out well," Hinton told reporters during the Nobel Prize announcement ceremony, noting that he identifies with the second case but is worried that the overall consequence of A.I.
"might be systems more intelligent than us that eventually take control." With a Nobel Prize under his belt, Hinton said he expects the award to give more credence to his warnings. "Hopefully it will make me more credible when I say these things."
[51]
Nobel physics prize awarded to 'godfather of AI' who warned the technology could end humanity
The 2024 Nobel prize for physics has been awarded to two scientists who laid "the foundation" for artificial intelligence - although one of them recently warned the technology could be the end of humanity. John Hopfield from Princeton University and Geoffrey Hinton from the University of Toronto spent decades developing our knowledge of artificial neural networks, which are the basis of a lot of modern artificial intelligence. Artificial neural networks are inspired by the human brain. Just as we learn by strengthening or weakening the connections between synapses, machines can learn by strengthening or weakening the connections between nodes. Professor Hopfield and Professor Hinton, who has been described as the "godfather of AI", developed artificial neural networks that helped "initiate the current explosive development of machine learning," according to the awarding body, the Royal Swedish Academy of Sciences. But despite his work advancing the technology, Professor Hinton made waves when he stepped down from Google in 2023 because of his concerns about AI. In an interview with the New York Times, he said he sometimes regretted his life's work, telling the newspaper: "It is hard to see how you can prevent the bad actors from using it for bad things". He even warned the technology could pose a threat to humanity because the machines often learn unexpected behaviour from the vast amounts of data they analyse. They will share a prize of 11 million Swedish kronor (around £810,000). "This year's two Nobel Laureates in physics have used tools from physics to develop methods that are the foundation of today's powerful machine learning," said the academy in a statement. "Machine learning based on artificial neural networks is currently revolutionising science, engineering and daily life." The Nobel prizes are considered some of the most prestigious awards in the world and were created in the will of Alfred Nobel, a Swedish scientist who invented dynamite. Albert Einstein and Niels Bohr, who helped shape the modern understanding of atomic structure, both received the Nobel prize for physics in the past. Last year it was awarded to Pierre Agostini, Ferenc Krausz and Anne L'Huillier for their work in creating ultra-short pulses of light that can show changes within atoms, potentially improving the detection of diseases. Physics is the second Nobel prize to be awarded this week. Yesterday, two American scientists who discovered how "microRNA" controls the decoding of genetic information in living organisms received the Nobel prize for medicine.
[52]
Geoffrey Hinton and John Hopfield share Nobel Prize for work on AI
The Academy listed some of the crucial applications of the two scientists' work, including improving climate modelling, the development of solar cells, and the analysis of medical images. Prof Hinton's pioneering research on neural networks paved the way for current AI systems like ChatGPT. In artificial intelligence, neural networks are systems that learn and process information in a way loosely modelled on the human brain, allowing AIs to learn from experience much as a person would; when such networks are stacked in many layers, the approach is known as deep learning. Prof Hinton said his work on artificial neural networks was revolutionary. "It's going to be like the Industrial Revolution - but instead of our physical capabilities, it's going to exceed our intellectual capabilities," he said. But he said he also had concerns about the future. He was asked whether he regretted his life's work, something he had suggested to journalists last year. In reply, he said he would do the same work again, "but I worry that the overall consequences of this might be systems that are more intelligent than us that might eventually take control". He also said he now uses the AI chatbot ChatGPT-4 for many things, while knowing that it does not always get the answer right. Professor John Hopfield invented a network that can save and recreate patterns. It draws on the physics used to describe a material's properties in terms of its atomic spins. Much as the brain can recall a word from associated but incomplete cues, Prof Hopfield's network can take an incomplete pattern and retrieve the stored pattern it most closely resembles. The Nobel Prize committee said the two scientists' work has become part of our daily lives, including in facial recognition and language translation. But Ellen Moons, chair of the Nobel Committee for Physics, said "its rapid development has also raised concerns about our future collectively". The winners share a prize fund worth 11m Swedish kronor (£810,000).
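To make the associative-memory idea concrete, here is a minimal Python sketch (assuming NumPy) of a Hopfield-style network: it stores a single binary pattern with a simple Hebbian weight rule and then recovers it from a corrupted copy. The pattern, the flipped units and the number of update sweeps are illustrative choices for this example, not anything taken from Hopfield's original paper.

```python
import numpy as np

# A stored "memory": a binary pattern of +1/-1 values (illustrative only).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian storage rule: strengthen connections between units that agree.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Start from an incomplete/corrupted version of the memory.
state = pattern.copy()
state[[1, 4, 6]] *= -1  # flip a few units

# Asynchronous updates: each unit aligns with the weighted "vote" of the others.
for _ in range(5):
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered:", state)
print("matches stored pattern:", np.array_equal(state, pattern))
```

Each update can only lower (or leave unchanged) the network's energy, so the state slides downhill until it settles on the stored pattern closest to the initial cue, which is the recall behaviour described above.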
[53]
Former CMU Faculty Geoffrey Hinton Awarded 2024 Nobel Prize in Physics
The Royal Swedish Academy of Sciences today awarded the 2024 Nobel Prize in Physics to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto in recognition of their foundational work in machine learning with artificial neural networks. Inspired by the human brain, artificial neural networks are computing systems used to process data and learn from it. Hinton served on the Computer Science Department faculty at Carnegie Mellon University from 1982 to 1987. "The Nobel Prize is one of the most significant and cherished public recognitions of researchers today," said CMU President Farnam Jahanian. "Our extended Carnegie Mellon University community is extraordinarily proud to see Geoffrey Hinton's talents and pioneering research celebrated in such a meaningful way and grateful for his many scholarly contributions to computer science, AI and society." At CMU, he co-authored an influential paper on the backpropagation algorithm, which allows neural networks to discover their own internal representations of data. He demonstrated that the algorithm enabled neural networks to solve problems previously thought to be beyond their reach. The prize announcement also cites work Hinton did on Boltzmann machines with Terrence Sejnowski, then at Johns Hopkins University. Later, at the University of Toronto, Hinton and his students made improvements to convolutional neural networks that cut error rates for object recognition in half, reshaping the field of computer vision. "The laureates' work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties," said Ellen Moons, chair of the Nobel Committee for Physics. At the University of Toronto, Hinton advised Ruslan Salakhutdinov as he pursued his Ph.D. from 2005 to 2009. Now the UPMC Professor of Computer Science in CMU's Machine Learning Department (MLD), Salakhutdinov cited Hinton as his key influence. "I wouldn't be where I am today without Geoff and his guidance," Salakhutdinov said. "Geoff basically discovered this unique algorithm that could train these deep networks efficiently. This laid the groundwork for a lot of the deep learning models and architecture, and it inspired a lot of people to start looking into it - it was all driven by Geoff." Salakhutdinov continues to work on generative models at Carnegie Mellon, including large language models, AI agents, deep learning and decision making. He said that Hinton thought fondly of CMU. "When he was at CMU, it was amazing. He would go to CMU and see all of these students and researchers in the labs working hard and believe that they were creating the future, and that unique environment was something that he loved," Salakhutdinov said. As the director of MLD at CMU, Zico Kolter is guiding the School of Computer Science through the research revolution brought on by generative artificial intelligence tools. His own research revolves around machine learning, optimization and control, with much of the work centered on making deep learning algorithms safer, more robust and more modular. Kolter previously demonstrated how it's possible to circumvent the safeguards of large language models. "The field Geoff helped spawn -- deep learning -- has become one of the biggest things in our society," Kolter said. "Almost all modern AI systems are based on deep learning. Geoff was foundational to deep learning."
Hinton is an ACM Turing Award winner (along with Yoshua Bengio and Yann LeCun, for their revolutionary work on deep neural networks) and received Carnegie Mellon's 2021 Dickson Prize in Science. Born in London in 1947, he received his Ph.D. from the University of Edinburgh in 1978. The Nobel Prize comes with an award of 11 million Swedish kronor, or $1 million, split between the winners.
[54]
'Godfather of AI' wins Nobel Prize
Stockholm | US scientist John Hopfield and British-Canadian Geoffrey Hinton won the 2024 Nobel Prize in Physics for discoveries and inventions that laid the foundation for machine learning, the award-giving body said on Tuesday. Hinton has been widely credited as a godfather of artificial intelligence and made headlines when he quit his job at Google last year to be able to more easily speak about the dangers of the technology he had pioneered.
[55]
Nobel Prize in physics: Hopfield, Hinton honored for AI breakthroughs
Hinton, of the University of Toronto, is a Canadian-British computer scientist and cognitive psychologist. Between the lines: Hinton, who left his job at Google last year in part to be vocal about his concerns about the technology he helped create, is clear-eyed about the risks, the New York Times reports. What they're saying: "It will be comparable with the Industrial Revolution. Instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," Hinton said on a call with reporters.
[56]
Nobel Prize goes to scientists' work on machine learning
The Nobel Prize in Physics has been awarded to two scientists, John Hopfield and Geoffrey Hinton, for their work on machine learning. The announcement was made by the Royal Swedish Academy of Sciences at a press conference in Stockholm, Sweden. Machine learning is key to artificial intelligence: it is how a computer trains itself on data rather than following explicit instructions, and it drives a vast range of technology we use today, from internet search to editing photographs on our phones. The winners share a prize fund worth 11m Swedish kronor (£810,000).
[57]
AI Pioneers Hopfield & Hinton Win Nobel Prize: Explore Groundbreaking AI discovery
That algorithm, backpropagation, is critical in training deep neural networks because it lets these systems correct their own errors, adjusting internal connection weights until the outputs match the data (a minimal sketch appears below). It is therefore a crucial component of complex tasks such as identifying objects in images and recognizing spoken language. Hopfield's and Hinton's discoveries of these foundational tools transformed machine learning and underpin today's AI applications. Their research in the 1980s has since sparked breakthroughs in related fields, including:
Health: Hinton's research made it possible to diagnose diseases from medical images, predict patient outcomes and build individualized treatment plans from huge datasets.
Autonomous driving: Neural networks let self-driving cars interpret real-time sensor data, detect objects and make decisions in complex environments, drawing in part on associative-memory ideas descended from the Hopfield network.
Natural language processing: Their contributions enabled systems that understand and generate human language, powering virtual assistants, translation tools and customer-support chatbots.
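As a rough illustration of learning through error correction, the sketch below (in Python, assuming NumPy) trains a single artificial neuron by repeatedly nudging its connection weights against its prediction error on a toy dataset. The data, learning rate and iteration count are arbitrary choices for this example; real systems apply the same principle across millions of weights and many layers.

```python
import numpy as np

# Toy task: output 1 when the two inputs sum to more than 1, else 0 (illustrative).
X = np.array([[0.2, 0.1], [0.9, 0.8], [0.4, 0.9], [0.1, 0.3]])
y = np.array([0.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # connection weights
b = 0.0                  # bias
lr = 0.5                 # learning rate

for _ in range(2000):
    pred = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid output of the neuron
    error = pred - y                        # how wrong each prediction is
    grad = error * pred * (1 - pred)        # gradient of the squared error
    w -= lr * (X.T @ grad) / len(y)         # error correction: adjust the weights
    b -= lr * grad.mean()                   # ...and the bias

print(np.round(pred, 2))  # predictions drift toward the targets [0, 1, 1, 0]
```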
[58]
AI Godfather Geoffrey Hinton Wins Nobel Prize For Artificial Intelligence Work: 'Work Has Already Been Of The Greatest Benefit' - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
Geoffrey Hinton is considered the Godfather of AI and recognized for his early work in the sector. Geoffrey Hinton, known as the "Godfather of AI," won a Nobel Prize in Physics for his work in artificial intelligence. What Happened: Hinton and Princeton professor John Hopfield are the joint winners of the Nobel Prize in Physics for 2024 thanks to their early work in AI and deep learning dating back to the 1970s and 1980s. The award, announced by the Royal Swedish Academy of Sciences, credits Hinton and Hopfield with "foundational discoveries and inventions that enable machine learning with artificial neural networks," as reported by TechCrunch. Hinton, currently a professor at the University of Toronto, developed an algorithm that helps neural networks learn from their mistakes. His work helped transform how today's AI models are trained to operate. The recognition of Hinton and Hopfield as Nobel Prize winners could strengthen AI's position as a growth sector moving forward. As winners of the Nobel Prize, Hinton and Hopfield will be referred to as "laureates" and receive gold medals, diplomas, and split a cash prize of around $1 million. "The laureates' work has already been of the greatest benefit," Nobel Committee for Physics chair Ellen Moons said. "In physics, we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties." Why It's Important: With his early research and development in the space, Hinton is often considered one of the top voices in AI and deep learning. Hinton's company DNNresearch was acquired by Google, a unit of Alphabet Inc (GOOG, GOOGL), in 2013. After joining Google as part of the acquisition, Hinton quit the technology giant last year. Hinton has publicly voiced concerns about the inherent risks of AI technology and the spread of misinformation. Hopfield developed the Hopfield network, which showed how neural networks can store patterns. Like Hinton, Hopfield's early work in the AI space is considered groundbreaking.
The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton for their groundbreaking work in artificial neural networks, which laid the foundation for modern machine learning and AI.
The 2024 Nobel Prize in Physics has been awarded to John Hopfield and Geoffrey Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks" 1. This unexpected decision by the Nobel Committee has sparked discussions about the intersection of physics and artificial intelligence.
John Hopfield, a theoretical physicist, and Geoffrey Hinton, a computer scientist, were recognized for their seminal work in the 1980s that laid the groundwork for modern AI systems 2. Their research drew heavily on concepts from physics, particularly statistical mechanics and the behavior of complex systems.
Hopfield's contribution came in 1982 with the development of Hopfield networks, a type of recurrent neural network inspired by concepts from neurobiology and molecular physics 3. These networks demonstrated how computers could use interconnected nodes to store and recall information, mimicking associative memory in biological systems 4.
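The physics analogy can be stated precisely. In the standard textbook formulation (a hedged summary rather than a quotation from the 1982 paper), a Hopfield network assigns each configuration of its binary units an energy of the same form used for interacting atomic spins, and recall is a slide downhill to a local energy minimum:

```latex
E = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j ,
\qquad
w_{ij} = \frac{1}{N}\sum_{\mu=1}^{P} x_i^{\mu} x_j^{\mu} .
```

Here the s values are the unit states (+1 or -1), w the connection weights, N the number of units and the x's the P stored patterns; updating units so that the energy never increases pulls the network toward the stored pattern closest to the initial cue.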
Hinton, along with colleagues, expanded on Hopfield's work by introducing the Boltzmann machine in 1985 2. This more complex neural network, named after physicist Ludwig Boltzmann, incorporated concepts from statistical mechanics and introduced "hidden units," allowing for more generalized understanding and pattern recognition 5.
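In the same textbook spirit (again a standard, hedged formulation rather than wording from the 1985 paper, and shown in the simplified "restricted" layout with connections only between visible units v and hidden units h), the Boltzmann machine assigns a probability to each joint state of its visible and hidden units using Boltzmann's distribution from statistical mechanics:

```latex
P(v, h) = \frac{e^{-E(v, h)}}{\sum_{v', h'} e^{-E(v', h')}} ,
\qquad
E(v, h) = -\sum_{i, j} w_{ij}\, v_i h_j \;-\; \sum_i a_i v_i \;-\; \sum_j b_j h_j .
```

Training adjusts the weights so that configurations resembling the training data become low-energy, and therefore high-probability, states, which is what lets the machine both classify inputs and generate new examples of the patterns it has seen.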
Hinton also played a crucial role in developing backpropagation, a key algorithm for training neural networks by adjusting connection weights based on performance 3. This breakthrough enabled the training of multi-layered networks, paving the way for deep learning techniques used in modern AI applications.
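A minimal sketch (in Python, assuming NumPy) shows the mechanism on the classic XOR problem, which a single layer cannot solve: the output error is propagated backwards through a hidden layer, and every connection weight is adjusted in proportion to its share of that error. The layer sizes, learning rate and iteration count here are arbitrary choices for illustration.

```python
import numpy as np

# XOR: the textbook task that a single-layer network cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: compute hidden activations and the network's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: send the output error back through the layers (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust every connection weight in proportion to its share of the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```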
The work of Hopfield and Hinton has had far-reaching implications, forming the basis for many current AI technologies. Their contributions have enabled advancements in image and speech recognition, natural language processing, and generative AI systems like ChatGPT 2.
Moreover, machine learning techniques have become invaluable tools in scientific research. For example, they played a crucial role in the discovery of the Higgs boson, demonstrating the symbiotic relationship between physics and AI 2.
While the Nobel Prize recognizes the groundbreaking nature of their work, both laureates have expressed mixed feelings about the rapid advancement of AI. Hinton, in particular, has voiced concerns about the potential risks associated with increasingly intelligent systems, calling for proactive regulation 1.
As AI continues to evolve and impact various sectors of society, the recognition of Hopfield and Hinton's work by the Nobel Committee underscores the critical role of interdisciplinary research in driving technological innovation. It also highlights the growing importance of AI in scientific inquiry and the need for continued exploration of its implications for humanity.
Reference
[1]
[2] IEEE Spectrum: Technology, Engineering, and Science News | Why the Nobel Prize in Physics Went to AI Research
[4]