Curated by THEOUTPOST
On Tue, 8 Oct, 4:10 PM UTC
48 Sources
[1]
AI Comes to the Nobels: Double Win Sparks Debate about Scientific Fields
Nobel committees recognized the transformative power of artificial intelligence (AI) in two of this year's prizes -- honouring pioneers of neural networks in the physics prize, and the developers of computational tools to study and design proteins in the chemistry prize. But not all researchers are happy. Moments after the Royal Swedish Academy of Sciences unveiled the winners of this year's physics Nobel, social media lit up, with several physicists arguing that the science underlying machine learning, celebrated in the awards to Geoffrey Hinton and John Hopfield, was not actually physics. "I'm speechless. I like machine learning and artificial neural networks as much as the next person, but hard to see that this is a physics discovery," Jonathan Pritchard, an astrophysicist at Imperial College London, wrote on X. "Guess the Nobel got hit by AI hype." The research by Hinton, at the University of Toronto in Canada, and Hopfield, at Princeton University in New Jersey, "falls into the field of computer science," says Sabine Hossenfelder, a physicist at the Munich Center for Mathematical Philosophy in Germany. "The annual Nobel Prize is a rare opportunity for physics -- and physicists with it -- to step into the spotlight. It's the day when friends and family remember they know a physicist and maybe go and ask him or her what this recent Nobel is all about. But not this year." Not everyone was troubled, however: many physicists welcomed the news. "Hopfield and Hinton's research was interdisciplinary, bringing together physics, math, computer science and neuroscience," says Matt Strassler, a theoretical physicist at Harvard University in Cambridge, Massachusetts. "In that sense, it belongs to all of these fields."
Anil Ananthaswamy, a science writer based in Berkeley, California, and author of the book Why Machines Learn, points out that although the research cited by the Nobel committee might not be theoretical physics in the purest sense, it is rooted in techniques and concepts from physics, such as energy. The Boltzmann machines invented by Hinton and the Hopfield networks "are both energy-based models", he says. The connection with physics became more tenuous in subsequent developments in machine learning, Ananthaswamy adds, particularly in the 'feed-forward' techniques that made neural networks easier to train. But physics ideas are making a comeback, and are helping researchers understand why the increasingly complex deep-learning systems do what they do. "We need the way of thinking we have in physics to study machine learning," says Lenka Zdeborová, who studies the statistical physics of computation at the Swiss Federal Institute of Technology in Lausanne. "I think that the Nobel prize in physics should continue to spread into more regions of physics knowledge," says Giorgio Parisi, a physicist at the Sapienza University of Rome who shared the 2021 Nobel. "Physics is becoming wider and wider, and it contains many areas of knowledge that did not exist in the past, or were not part of physics." Computer science seemed to be completing its Nobel take-over the day after the physics prize announcement, when Demis Hassabis and John Jumper, co-creators of the protein-folding prediction AI tool AlphaFold at Google DeepMind in London, won half of the chemistry Nobel. (The other half was awarded to David Baker at the University of Washington in Seattle for protein-design work that did not employ machine learning).
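The "energy-based" idea can be made concrete with a toy Hopfield network. The sketch below is purely illustrative (not the laureates' code): it stores one pattern with a Hebbian learning rule, assigns every state of the network an energy, and then recovers the stored pattern from a corrupted version by flipping units so that the energy never increases.

```python
import numpy as np

# Illustrative sketch of a tiny Hopfield network (invented example).
# "Energy-based" means each state s (a vector of +/-1 "neurons") is assigned
# an energy E(s) = -1/2 * s^T W s; recalling a memory means sliding downhill
# on this energy landscape until the network settles in a stored pattern.

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian rule: strengthen weights between units that are active together."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def energy(W, s):
    return -0.5 * s @ W @ s

def recall(W, s, steps=200):
    """Asynchronous updates; each flip never increases the energy."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one 8-unit pattern, corrupt two entries, then recover it.
memory = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
W = train(memory)
noisy = memory[0].copy()
noisy[0] *= -1
noisy[3] *= -1
restored = recall(W, noisy)
print(np.array_equal(restored, memory[0]))  # True once each corrupted unit has been updated
```

The corrupted state sits higher on the energy surface than the stored pattern, so the update rule rolls it back into the memory; this energy-descent picture is the physics connection Ananthaswamy points to.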
The prize was a recognition of the disruptive force of AI, but also of the steady accumulation of knowledge in structural and computational biology, says David Jones, a bioinformatician at University College London, who collaborated with DeepMind on the first version of AlphaFold. "I don't think AlphaFold involves any radical change in the underlying science that wasn't already in place," he says. "It's just how it was put together and conceived in such a seamless way that allowed AlphaFold to reach those heights." For example, one key input AlphaFold uses is the sequences of related proteins from different organisms, which can identify amino acid pairs that have tended to co-evolve and therefore might be in close physical proximity in a protein's 3D structure. Researchers were already using this insight to predict protein structures at the time AlphaFold was developed, and some even began embedding the idea in deep learning neural networks. "It wasn't just that we went to work and we pressed the AI button, and then we all went home," Jumper said at a press briefing at DeepMind on 9 October. "It was really an iterative process where we developed, we did research, we tried to find the right kind of combinations between what the community understood about proteins and how do we build those intuitions into our architecture." AlphaFold also would not have been possible were it not for the Protein Data Bank, a freely available repository of more than 200,000 protein structures -- including some that have contributed to previous Nobels -- determined using X-ray crystallography, cryo-electron microscopy and other experimental methods. "Each data point is years of effort from someone," Jumper said. Since they were first awarded in 1901, the Nobels have often been about the impact of research on society, and have rewarded practical inventions, not only pure science. In this respect, the 2024 prizes are not outliers, says Ananthaswamy. 
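The co-evolution signal Jones describes can be illustrated with a toy calculation (an invented alignment, not AlphaFold's actual pipeline): columns of a multiple sequence alignment that mutate together score high on mutual information, hinting that the corresponding residues may sit close together in the folded structure.

```python
from collections import Counter
from math import log2

# Toy illustration of the co-evolution idea (made-up data): in this tiny
# alignment, columns 1 and 3 mutate in lockstep, the way residue pairs in
# physical contact tend to co-evolve across related proteins.
alignment = [
    "ACDG",
    "AKDW",
    "SCEG",
    "SKEW",
    "ACDG",
    "AKEW",
]

def mutual_information(col_i, col_j):
    """MI between two alignment columns; high MI suggests co-variation."""
    n = len(alignment)
    xs = [seq[col_i] for seq in alignment]
    ys = [seq[col_j] for seq in alignment]
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * log2(p / ((px[x] / n) * (py[y] / n)))
    return mi

# Score every column pair; the co-evolving pair should score highest.
pairs = {(i, j): mutual_information(i, j)
         for i in range(4) for j in range(i + 1, 4)}
best = max(pairs, key=pairs.get)
print(best)  # → (1, 3): these two columns vary together
```

Real contact-prediction methods are far more sophisticated (they must separate direct from indirect correlations), but this is the raw signal that pre-AlphaFold methods, and AlphaFold itself, feed on.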
"Sometimes they are given for very good engineering projects. That includes the prizes for lasers and PCR."
[3]
A Shift in the World of Science
What this year's Nobels can teach us about science and humanity.
By Alan Burdick and Katrina Miller
We are journalists on The Times's Science desk. Technology observers have grown increasingly vocal in recent years about the threat that artificial intelligence poses to the human variety. A.I. models can write and talk like us, draw and paint like us, crush us at chess and Go. They express an unnerving simulacrum of creativity, not least where the truth is concerned. A.I. is coming for science, too, as this week's Nobel Prizes seemed keen to demonstrate. On Tuesday, the Nobel Prize in Physics was awarded to two scientists who helped computers "learn" closer to the way the human brain does. A day later, the Nobel Prize in Chemistry went to three researchers for using A.I. to invent new proteins and reveal the structure of existing ones -- a problem that stumped biologists for decades, yet could be solved by A.I. in minutes. Cue the grousing: This was computer science, not physics or chemistry! Indeed, of the five laureates on Tuesday and Wednesday, arguably only one, the University of Washington biochemist David Baker, works in the field he was awarded in. The scientific Nobels tend to award concrete results over theories, empirical discovery over pure idea. But that schema didn't quite hold this year, either. One prize went to scientists who leaned into physics as a foundation on which to build computer models used for no groundbreaking result in particular. The laureates on Wednesday, on the other hand, had created computer models that made big advancements in biochemistry. These were outstanding and fundamentally human accomplishments, to be sure. But the Nobel recognition underscored a chilling prospect: Henceforth, perhaps scientists will merely craft the tools that make the breakthroughs, rather than do the revolutionary work themselves or even understand how it came about.
Artificial intelligence designs and builds hundreds of molecular Notre Dames and Hagia Sophias, and a researcher gets a pat for inventing the shovel.
[4]
AI was central to two of 2024's Nobel prize categories. It's a sign of things to come
The 2024 Nobel Prizes in physics and chemistry have given us a glimpse of the future of science. Artificial intelligence (AI) was central to the discoveries honored by both awards. You have to wonder what Alfred Nobel, who founded the prizes, would think of it all. We are certain to see many more Nobel medals handed to researchers who made use of AI tools. As this happens, we may find the scientific methods honored by the Nobel committee depart from straightforward categories like "physics," "chemistry" and "physiology or medicine." We may also see the scientific backgrounds of recipients retain a looser connection with these categories. This year's physics prize was awarded to the American John Hopfield, at Princeton University, and British-born Geoffrey Hinton, from the University of Toronto. While Hopfield is a physicist, Hinton studied experimental psychology before gravitating to AI. The chemistry prize was shared between biochemist David Baker, from the University of Washington, and the computer scientists Demis Hassabis and John Jumper, who are both at Google DeepMind in the UK. There is a close connection between the AI-based advances honored in the physics and chemistry categories. Hinton helped develop an approach used by DeepMind to make its breakthrough in predicting the shapes of proteins. The physics laureates, Hinton in particular, laid the foundations of the powerful field known as machine learning. This is a subset of AI concerned with algorithms: sets of rules for performing specific computational tasks. Many of those algorithms run on neural networks, a model of computing that mimics the human brain's structure and function to process data. Hopfield's own network is not widely used today, but the backpropagation algorithm (co-invented by Hinton) has had a tremendous impact on many different sciences and technologies, because it is what allows scientists to "train" enormous neural networks.
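To see what "training" by backpropagation involves, here is a minimal sketch (an illustration of the idea, not Hinton's original formulation): a tiny two-layer network learns the XOR function by running the chain rule backwards from the output error to every weight, then nudging each weight downhill along its gradient.

```python
import numpy as np

# Minimal backpropagation sketch (illustrative). A two-layer network learns
# XOR: the forward pass computes a prediction, the backward pass applies the
# chain rule to assign each weight its share of the error, and gradient
# descent adjusts the weights.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(a):
    # Append a constant column so each layer gets a bias weight.
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 8))  # input (+bias) -> 8 hidden units
W2 = rng.normal(size=(9, 1))  # hidden (+bias) -> output

lr = 0.5
for _ in range(10000):
    # Forward pass: inputs -> hidden layer -> output.
    h = sigmoid(with_bias(X) @ W1)
    out = sigmoid(with_bias(h) @ W2)
    # Backward pass (chain rule): error -> output layer -> hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)
    W2 -= lr * with_bias(h).T @ d_out
    W1 -= lr * with_bias(X).T @ d_h

print(np.round(out).ravel())
```

After training, rounding the four outputs typically recovers XOR's truth table (0, 1, 1, 0). The same recipe, scaled up to billions of weights, is what trains the enormous networks the article describes.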
While the Nobel committee did its best to connect this influential algorithm to physics, it's fair to say that the link is not a direct one. Training a machine-learning system involves exposing it to vast amounts of data, often from the internet. Hinton's advance ultimately enabled the training of systems such as GPT (the technology behind ChatGPT), and the AI algorithms AlphaGo and AlphaFold, developed by Google DeepMind. So, backpropagation's impact has been enormous. DeepMind's AlphaFold 2 solved a 50-year-old problem: predicting the complex structures of proteins from their molecular building blocks, amino acids. Every two years since 1994, scientists have held a contest to find the best ways to predict protein structures and shapes from the sequences of their amino acids. The competition is called Critical Assessment of Structure Prediction (CASP). For the past few contests, CASP winners have used some version of DeepMind's AlphaFold. There is, therefore, a direct line to be drawn from Hinton's backpropagation to Google DeepMind's AlphaFold 2 breakthrough. David Baker used a computer program called Rosetta to achieve the difficult feat of building new kinds of proteins. Both Baker's and DeepMind's approaches hold enormous potential for future applications. Attributing credit has always been a controversial aspect of the Nobel prizes. A maximum of three researchers can share a Nobel. But big advances in science are collaborative. Scientific papers may have 10, 20, 30 authors or more. More than one team might contribute to the discoveries honored by the Nobel committee. This year we may have further discussions about the attribution of the research on the backpropagation algorithm, which has been claimed by various researchers, as well as about the general attribution of a discovery to a field like physics. We now have a new dimension to the attribution problem.
It's increasingly unclear whether we will always be able to distinguish between the contributions of human scientists and those of their artificial collaborators -- the AI tools that are already helping push forward the boundaries of our knowledge. In the future, could we see machines take the place of scientists, with humans being consigned to a supporting role? If so, perhaps the AI tool will get the main Nobel prize, with humans needing their own category.
[5]
A.I.'s Sweep Reignites Debate Over Whether It's Time to Modernize the Nobel Prizes
This year's awards have reignited debates over whether the Nobel Prizes should modernize their categories to reflect current scientific fields. A.I. is having a moment at this year's Nobel Prizes. Earlier this week, the award for physics was granted to Geoffrey Hinton and John Hopfield, two pioneering A.I. researchers who made breakthroughs in training artificial neural networks. The following day, DeepMind's Demis Hassabis and John Jumper won the Nobel Prize in Chemistry, along with the American biochemist David Baker, for developing an A.I. model that's able to predict the structure of virtually all proteins. Some scientists, including the awardees themselves, were left shocked by A.I.'s dominance this year. Hinton said he was "flabbergasted" by the honor and noted in a subsequent press conference that he didn't necessarily consider himself a physicist. "I have very high respect for physics. I dropped out of physics after my first year at university because I couldn't do the complicated math, so getting an award in physics was very surprising to me," he said. To some, the new technology's presence at the prestigious scientific prizes displayed the sheer force of its impact on all fields, including science. While Huimin Zhao, chair of Chemical and Biomolecular Engineering at the University of Illinois Urbana-Champaign, told Observer he was "very surprised" to learn about the A.I. focus of the chemistry prize, the scientist described Hassabis and Jumper's AlphaFold A.I. program as "revolutionary."
The 2020 breakthrough from the duo solved a problem scientists had been working on for 50 years and is arguably "the biggest application of A.I. in the science domain," said Zhao. The Nobel Prize in Physics, which recognized Hinton and Hopfield for decades of work that laid the foundation for today's A.I. revolution, is an award that "celebrates interdisciplinarity," said Michael Moloney, CEO of the American Institute of Physics, in a statement. Honoring how physics has driven the development of computational algorithms, it also "demonstrates that fundamental shifts in our scientific understanding can sometimes take decades to have wider impact," he said.
Is it time for the Nobel Prizes to update their categories?
Despite the achievements of the new Nobel Prize laureates, this year's awards have reignited debates over whether the Nobel Prizes should modernize their categories to reflect current scientific fields. Hinton himself noted in an interview that his work would be more appropriate for a Nobel Prize in computer science. Hassabis, meanwhile, pointed to the fact that the concept of computer science didn't even exist when the awards were first created more than 100 years ago. The prizes have remained in accordance with the wishes set out in the 1895 will of Swedish inventor Alfred Nobel, who requested that his estate be used to fund annual prizes in physics, chemistry, physiology and medicine, peace and literature. The Nobel Foundation, which manages Nobel's fortune, has given no indication that it is willing to change these categories and describes the 1968 addition of an award for economic sciences as "an exception." Unlike the other awards, the economics prize is funded by Sveriges Riksbank -- Sweden's central bank -- and is not an official Nobel Prize but instead labeled a prize "in Memory of Alfred Nobel." Its establishment coincided with the celebration of Sveriges Riksbank's 300th anniversary, said the bank in a statement to Observer.
This isn't the first time the scientific community has urged the Nobel Prizes to add new honors. In 2009, a group of researchers and academics including Tim Hunt, a 2001 Nobel Prize laureate in physiology and medicine, wrote a letter to the Nobel Foundation requesting it adapt to recognize emerging disciplines like global environment and public health, fundamental biology and behavioral science. "We appreciate that the foundation is bound by Nobel's will. But we also note that the foundation has shown flexibility in the past," read the letter, which didn't result in any new additions. While A.I.'s sweep at this year's awards is unlikely to spur any significant changes in the prizes themselves, researchers say the awards will lend further credibility to the new technology's capabilities across science. The back-to-back prizes in physics and chemistry will inspire more scientists to utilize A.I., according to Zhao. "Now, we've started to realize the potential of A.I. for scientific discoveries," he said. "I would say this is the beginning."
[6]
AI steps into science limelight with Nobel wins
For long periods of its history, artificial intelligence has lurked in the hinterland of science, often unloved and unfunded -- but two Nobel prizes in one week suggest its time in the sunshine has finally arrived. First on Tuesday, Geoffrey Hinton and John Hopfield won the physics prize for their pioneering work in creating the foundations of modern AI. Then on Wednesday, David Baker, John Jumper and Demis Hassabis shared the chemistry prize for work revealing the secrets of proteins through AI. While the trio had been among the favorites for the chemistry prize, the physics one was unexpected. "I'm flabbergasted," said Hinton when he was told of the prize. "I had no idea this would happen. I'm very surprised." He wasn't the only one. Online commentators wondered why a computer scientist was getting a physics prize. And with programs such as OpenAI's ChatGPT dominating the cultural conversation around AI, for many the idea that such a technology could be worthy of such an award seemed bizarre.
'AI winter'
But for scientists the news was not so surprising. "AI's role in transforming how we do science cannot be underestimated," Elena Simperl, a professor at King's College London's informatics department, told AFP. "The Nobel prize news recognizes this, while also acknowledging the role interdisciplinary methods play in advancing computing." The science now bundled together as artificial intelligence has a long history, emerging in the 1950s and 60s with rudimentary chatbots, translation machines and simple algorithms. But many of the early experiments failed to take off and researchers struggled to get funding, particularly during the 1970s and the 1990s, periods known as "AI winters". Before the latest explosion of interest prompted by ChatGPT in 2022, AI had only had a handful of moments when it pierced the public imagination. In 2016, a program called AlphaGo, created by Hassabis's DeepMind, beat South Korean grandmaster Lee Se-Dol at the game Go.
It came almost two decades after the IBM-developed supercomputer Deep Blue beat world chess champion Garry Kasparov. In his acceptance speech, Hassabis flagged that there was a direct line between AlphaGo and AlphaFold, the program that won them the Nobel for predicting protein structures. "We used games in the early part of DeepMind to train as a proving ground for our early algorithms that then led to some of the techniques we eventually use in our modern programs," he said. And he encouraged children to play games, saying it was "just a really fun way to get into the guts of how computers work".
New Nobels needed?
Simperl said that, far from it being problematic to see AI pioneers being rewarded by the Nobels, it should be encouraged. "Maybe it's time for this to be recognized with a new Nobel prize category," she said. She added that disciplines like software engineering and cybersecurity also deserved recognition for their contributions to society. "There is no issue in my mind with an AI scientist being recognized in a Nobel prize scientific category," she said. "This is merely an acknowledgement of how modern science works today." Outside the science community, the conversation continues to be dominated by the astronomical valuations of AI companies and the outsize cultural influence of some of their leaders. After Wednesday's prize was announced, online memes quickly emerged suggesting Sam Altman, boss of ChatGPT-maker OpenAI, could be next in line. "It's not done yet," Sean O'Heigeartaigh, director of the AI: Futures and Responsibility Program at the University of Cambridge, wrote on the social media platform X. "Hearing reports that the Nobel prize for literature will be going to the authors of 'OpenAI's nonprofit governance structure' for outstanding contributions to creative fiction."
[7]
Is AI physics or chemistry? Nobel Prize wins spark debate
The Nobel Prize announcements this week heavily featured the use of AI, which has ignited discussion over traditional sciences. Artificial intelligence (AI) jumped into the upper echelons of scientific achievement this week, featuring in the winning work for the Nobel prizes in physics and chemistry. The awards highlight the impact the technology is making in every aspect of our lives, but they have also sparked debate, and even outrage, among researchers on social media about how AI relates to these scientific disciplines. "I'm speechless. I like machine learning and artificial neural networks as much as the next person, but hard to see that this is a physics discovery," Jonathan Pritchard, an astrophysicist at Imperial College London, wrote on X. "Guess the Nobel got hit by AI hype," he added. The Nobel Prize is usually awarded for decades-old research after its impact can be assessed to have "the greatest benefit to humankind". Geoffrey Hinton, one of the so-called 'godfathers of AI', and the physicist John Hopfield were awarded the Nobel Prize in Physics on Tuesday for work that began in the 1980s, which drew on physics concepts to invent artificial neural networks that sparked and influenced AI's development. A day later, AI hit the Nobel headlines again after Google DeepMind CEO Demis Hassabis, DeepMind director John Jumper and University of Washington professor David Baker jointly won the Chemistry prize for their work on proteins. Baker was praised for his work on an AI-driven protein prediction tool called RoseTTAFold and for developing new proteins. Hassabis and Jumper, meanwhile, developed an AI system that cracked a 50-year-old problem: predicting the structure of a protein. "It's far too premature to talk about AI being involved in all prizes," said Hassabis at a press conference. "The human ingenuity comes in first - asking the question, developing the hypothesis - and AI systems can't do any of that.
It just sort of analyses data right now," he said, adding that it was "interesting the committee decided to make a statement" by having the two AI-linked prizes. "My first reaction was that we are not taking seriously enough what AI is," said David Vivancos, CEO of the deep learning and AI organisation MindBigData.com. "I am a big admirer of [Hinton and Hopfield] and they have created a marvellous discovery. But the thing is, it's not in the realm of physics unless we think that physics is everything," he told Euronews Next. According to the Oxford English Dictionary, physics is described as "the branch of science concerned with the nature and properties of non-living matter and energy," meaning something that is physically there. Vivancos explained his view that physics "is tied to something physical, it is something real," while in terms of how AI behaves "it is more related to something that is happening in the mind of the computer instead of a physical being". But could AI be related to chemistry? In the case of the Nobel Prizes, arguably it can because it is computational chemistry, which uses computer simulation to assist in solving complex chemical problems. "The use of AI to predict protein structure is a huge advance with a myriad of uses in biology, medicine, and beyond," Andy Cooper, a professor of chemistry and director of the Materials Innovation Factory and Leverhulme Centre for Functional Materials Design at the University of Liverpool, said in a statement. "AI will impact other areas of chemistry, too, but the protein field has some special features. "First, there is a large amount of well-curated training data. Second, proteins are structurally complex but compositionally fairly simple -- they are built from a quite small selection of building blocks," he added.
There are different types of AI, such as generative, but broadly speaking AI is defined as a technology that enables computers and machines to simulate human learning, comprehension, and problem-solving. If AI should fit into a traditional science, then perhaps it is closer to mathematics, said Vivancos. But he also argued it could be biology or neuroscience, given its algorithms can help researchers comb through vast genetic data libraries. AI can fit into most categories and will arguably impact all of our lives on some level. For example, the technology has also been used to manage real-time road traffic, to power navigation apps like Google Maps, and in everyday items such as robot vacuum cleaners. The Nobel prizes in chemistry and physics being awarded for the use of AI shows the technology's "triumph of interdisciplinarity," said Virginia Dignum, a professor in the computing science department at Umeå University in Sweden who also leads the university's Social and Ethical Artificial Intelligence research group. "The real breakthroughs in science are no longer the domain of a single discipline but require a broad perspective and the combination of different insights," she told Euronews Next. "AI is here an accelerator and a support for the exploration of great spaces of research, a tool that can analyse vast datasets, predict outcomes, and even suggest new hypotheses, enhancing the capabilities of researchers in fields like biology, physics, chemistry, and medicine," she added. But Dignum said that the awards are not about AI's place in the Nobel Prizes but about the open-mindedness and "respectful attitude of scientists", and how much can be learned. "Maybe it is time to modernise the Nobel Prizes to recognise that the discoveries that really matter are beyond the traditional division in disciplines," Dignum said.
[8]
Nobel Prizes highlight the intersection of AI and fundamental research
This year's Nobel Prizes highlight AI's significant role in basic research, illustrating how computing blurs scientific boundaries. Winners used AI to mimic the human brain, model associative memory, and predict protein structures, emphasizing the intersection of physics, chemistry, and AI amid challenges in AI training and implementation.
Most of us tend to see AI in purely 'tech' terms. But this year's Nobel Prizes in physics and chemistry underline the importance of the 'science' behind the tech. They have also brought to the fore the role of AI in basic research, while muddying the disciplinary boundaries among the branches of scientific inquiry. John Hopfield used principles of physics to mimic the human brain in artificial neural networks. Geoffrey Hinton used statistical modelling and psychology to recreate how associative memories are formed: connections are reinforced when parts of a network work together. The pair, considered among the godfathers of AI, won the physics prize. Then, in chemistry, Demis Hassabis and John Jumper won the Nobel for having used AI to predict the structure of almost all known proteins. The physics prize, in effect, went to work on computing and mathematics, while the chemistry prize was derived through computing. Hopfield is a physicist whose area of work has been molecular biology. Hinton has worked in psychology and quit Google to be able to air his concerns over AI. Hassabis was a chess player and a computer programmer before turning his attention to biochemistry. AI is thus blurring the distinctions among areas of scientific inquiry, with computing and data finding solutions to problems whose mathematics is beyond human capacity. This could lead to a combination of two outcomes in fundamental research. One would be to use AI to make incremental hops in our understanding of nature. The other would be to ask even more difficult questions and then deploy the computing technology that helps arrive at an answer.
The caveat is that AI has to be trained properly to deliver the research outcomes we seek. At each stage of problem-solving, the programming demands will mount. AI, as it stands, is not an expert at everything: it hallucinates, cannot tell truth from falsehood, and remains an imprecise tool at best. Humans will need their eureka moments for quite some time yet.
[9]
AI steps into science limelight with Nobel wins
Paris (AFP) - For long periods of its history, artificial intelligence has lurked in the hinterland of science, often unloved and unfunded -- but two Nobel prizes in one week suggest its time in the sunshine has finally arrived. First on Tuesday, Geoffrey Hinton and John Hopfield won the physics prize for their pioneering work in creating the foundations of modern AI. Then on Wednesday, David Baker, John Jumper and Demis Hassabis shared the chemistry prize for work revealing the secrets of proteins through AI. While the trio had been among the favourites for the chemistry prize, the physics one was unexpected. "I'm flabbergasted," said Hinton when he was told of the prize. "I had no idea this would happen. I'm very surprised." He wasn't the only one. Online commentators wondered why a computer scientist was getting a physics prize. And with programs such as OpenAI's ChatGPT dominating the cultural conversation around AI, for many the idea that such a technology could be worthy of such an award seemed bizarre.
'AI winter'
But for scientists the news was not so surprising. "AI's role in transforming how we do science cannot be underestimated," Elena Simperl, a professor at King's College London's informatics department, told AFP. "The Nobel prize news recognises this, while also acknowledging the role interdisciplinary methods play in advancing computing." The science now bundled together as artificial intelligence has a long history, emerging in the 1950s and 60s with rudimentary chatbots, translation machines and simple algorithms. But many of the early experiments failed to take off and researchers struggled to get funding, particularly during the 1970s and the 1990s, periods known as "AI winters". Before the latest explosion of interest prompted by ChatGPT in 2022, AI had only had a handful of moments when it pierced the public imagination. In 2016, a program called AlphaGo created by Hassabis's DeepMind beat South Korean grandmaster Lee Se-Dol at the game Go.
It came almost a decade after the IBM-developed supercomputer Deep Blue beat world chess champion Garry Kasparov. In his acceptance speech, Hassabis flagged that there was a direct line between AlphaGo and AlphaFold, the program that won them the Nobel for predicting protein structures. "We used games in the early part of DeepMind to train as a proving ground for our early algorithms that then led to some of the techniques we eventually use in our modern programs," he said. And he encouraged children to play games, saying it was "just a really fun way to get into the guts of how computers work".
New Nobels needed?
Simperl said that, far from it being problematic to see AI pioneers being rewarded by the Nobels, it should be encouraged. "Maybe it's time for this to be recognised with a new Nobel prize category," she said. She added that disciplines like software engineering and cybersecurity also deserved recognition for their contributions to society. "There is no issue in my mind with an AI scientist being recognised in a Nobel prize scientific category," she said. "This is merely an acknowledgement of how modern science works today." Outside the science community, the conversation continues to be dominated by the astronomical valuations of AI companies and the outsize cultural influence of some of their leaders. After Wednesday's prize was announced, online memes quickly emerged suggesting Sam Altman, boss of ChatGPT-maker OpenAI, could be next in line. "It's not done yet," Sean O'Heigeartaigh, director of the AI: Futures and Responsibility Programme at the University of Cambridge, wrote on the social media platform X. "Hearing reports that the Nobel prize for literature will be going to the authors of 'OpenAI's nonprofit governance structure' for outstanding contributions to creative fiction."
[10]
How the AI Nobel Prizes Could Change the Focus of Research
It has been a banner week for artificial intelligence research. But could big wins for Demis Hassabis and Geoffrey Hinton change broader scientific incentives? Demis Hassabis didn't know he was getting the Nobel Prize in chemistry from the Royal Swedish Academy of Sciences until his wife started being bombarded with calls from a Swedish number on Skype. "She would put it down several times, and then they kept persisting," Hassabis said today in a press conference convened to celebrate the awarding of the prize, alongside John Jumper, his colleague at Google DeepMind. "Then I think she realized it was a Swedish number, and they asked for my number." That he won the prize -- the most prestigious in science -- may not have been all that much of a shock: A day earlier, Geoffrey Hinton, often called one of the "godfathers of AI," and Princeton University's John Hopfield were awarded the Nobel Prize in physics for their work on machine learning. "Obviously the committee decided to kind of make a statement, I guess, when having the two together," said Hassabis at the same press conference. In case it wasn't clear: AI is here, and it's now possible to win a Nobel Prize by studying it and contributing to other fields -- whether physics in the case of Hinton and Hopfield or chemistry in the case of Hassabis and Jumper, who won alongside David Baker, a University of Washington genome scientist. "It's no doubt a huge 'AI in science' moment," says Eleanor Drage, senior research fellow at the University of Cambridge's Leverhulme Center for the Future of Intelligence. "Going by highly accomplished and illustrious computer scientists winning a chemistry prize and a physics prize, we're all bracing for who will be awarded a peace prize," she says, explaining that colleagues in her office were joking about xAI owner Elon Musk being tipped for that award.
Drage calls the awarding of physics and chemistry prizes to AI researchers "a major polemic, not only within those disciplines, but looking in from the outside." She suggests the awards could be for one of two reasons: either a notable shift in disciplinary boundaries enabled by the ubiquity of AI in academic research, or because "we're so obsessed with computer scientists that we're willing to slot them in anywhere." She isn't sure which route this week's decisions signify. But she and others are sure that it'll make a meaningful difference to the future of research. "Winning a Nobel by using AI may be a ship that's sailed, but it will influence research directions," says Matt Hodgkinson, an independent scientific research integrity specialist and former research integrity manager at the UK Research Integrity Office. The question is whether it'll influence them in the right way. Baker, one of this year's winners of the Nobel Prize for chemistry, has long been one of the leading researchers in the use of AI for protein-structure prediction. He had been laboring away for decades at the problem, making incremental gains, recognizing that the well-defined problem and format of protein structure made it a useful test bed for AI algorithms. This wasn't a fly-by-night success story -- Baker has published more than 600 papers in his career -- and neither was AlphaFold2, the Google DeepMind project that was awarded the prize by the committee.
[11]
AI is having its Nobel moment. Do scientists need the tech industry to sustain it?
Hours after the artificial intelligence pioneer Geoffrey Hinton won a Nobel Prize in physics, he drove a rented car to Google's California headquarters to celebrate. Hinton doesn't work at Google anymore. Nor did the longtime professor at the University of Toronto do his pioneering research at the tech giant. But his impromptu party reflected AI's moment as a commercial blockbuster that has also reached the pinnacles of scientific recognition. That was Tuesday. Then, early Wednesday, two employees of Google's AI division won a Nobel Prize in chemistry for using AI to predict and design novel proteins. "This is really a testament to the power of computer science and artificial intelligence," said Jeanette Wing, a professor of computer science at Columbia University. Asked about the historic back-to-back science awards for AI work in an email Wednesday, Hinton said only: "Neural networks are the future." It didn't always seem that way for researchers who decades ago experimented with interconnected computer nodes inspired by neurons in the human brain. Hinton shares this year's physics Nobel with another scientist, John Hopfield, for helping develop those building blocks of machine learning. Neural network advances came from "basic, curiosity-driven research," Hinton said at a press conference after his win. "Not out of throwing money at applied problems, but actually letting scientists follow their curiosity to try and understand things." Such work started well before Google existed. But a bountiful tech industry has now made it easier for AI scientists to pursue their ideas even as it has challenged them with new ethical questions about the societal impacts of their work. One reason why the current wave of AI research is so closely tied to the tech industry is that only a handful of corporations have the resources to build the most powerful AI systems. 
"These discoveries and this capability could not happen without humongous computational power and humongous amounts of digital data," Wing said. "There are very few companies -- tech companies -- that have that kind of computational power. Google is one. Microsoft is another." The chemistry Nobel Prize awarded Wednesday went to Demis Hassabis and John Jumper of Google's London-based DeepMind laboratory along with researcher David Baker at the University of Washington for work that could help discover new medicines. Hassabis, the CEO and co-founder of DeepMind, which Google acquired in 2014, told the AP in an interview Wednesday his dream was to model his research laboratory on the "incredible storied history" of Bell Labs. Started in 1925, the New Jersey-based industrial lab was the workplace of multiple Nobel-winning scientists over several decades who helped develop modern computing and telecommunications. "I wanted to recreate a modern day industrial research lab that really did cutting-edge research," Hassabis said. "But of course, that needs a lot of patience and a lot of support. We've had that from Google and it's been amazing." Hinton joined Google late in his career and quit last year so he could talk more freely about his concerns about AI's dangers, particularly what happens if humans lose control of machines that become smarter than us. But he stops short of criticizing his former employer. Hinton, 76, said he was staying in a cheap hotel in Palo Alto, California when the Nobel committee woke him up with a phone call early Tuesday morning, leading him to cancel a medical appointment scheduled for later that day. By the time the sleep-deprived scientist reached the Google campus in nearby Mountain View, he "seemed pretty lively and not very tired at all" as colleagues popped bottles of champagne, said computer scientist Richard Zemel, a former doctoral student of Hinton's who joined him at the Google party Tuesday. 
"Obviously there are these big companies now that are trying to cash in on all the commercial success and that is exciting," said Zemel, now a Columbia professor. But Zemel said what's more important to Hinton and his closest colleagues has been what the Nobel recognition means to the fundamental research they spent decades trying to advance. Guests included Google executives and another former Hinton student, Ilya Sutskever, a co-founder and former chief scientist and board member at ChatGPT maker OpenAI. Sutskever helped lead a group of board members who briefly ousted OpenAI CEO Sam Altman last year in turmoil that has symbolized the industry's conflicts. An hour before the party, Hinton used his Nobel bully pulpit to throw shade at OpenAI during opening remarks at a virtual press conference organized by the University of Toronto in which he thanked former mentors and students. "I'm particularly proud of the fact that one of my students fired Sam Altman," Hinton said. Asked to elaborate, Hinton said OpenAI started with a primary objective to develop better-than-human artificial general intelligence "and ensure that it was safe." "And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that's unfortunate," Hinton said. In response, OpenAI said in a statement that it is "proud of delivering the most capable and safest AI systems" and that they "safely serve hundreds of millions of people each week." Conflicts are likely to persist in a field where building even a relatively modest AI system requires resources "well beyond those of your typical research university," said Michael Kearns, a professor of computer science at the University of Pennsylvania. But Kearns, who sits on the committee that picks the winners of computer science's top prize -- the Turing Award -- said this week marks a "great victory for interdisciplinary research" that was decades in the making. 
Hinton is only the second person to win both a Nobel and Turing. The first, Turing-winning political scientist Herbert Simon, started working on what he called "computer simulation of human cognition" in the 1950s and won the Nobel economics prize in 1978 for his study of organizational decision-making. Wing, who met Simon in her early career, said scientists are still just at the tip of finding ways to apply computing's most powerful capabilities to other fields. "We're just at the beginning in terms of scientific discovery using AI," she said. AP Business Writer Kelvin Chan contributed to this report.
[14]
How artificial intelligence won the Nobel Prizes
Sir Demis Hassabis discovered he had won the Nobel Prize in chemistry this week when his wife -- also a scientific researcher -- received several calls on Skype urgently requesting his phone number. "My mind was completely frazzled, which hardly ever happens. It was . . . almost like an out-of-body experience," said Hassabis, co-founder and chief executive of Google DeepMind, the artificial intelligence division of the Silicon Valley search giant. The chemistry Nobel, which Hassabis shared with his colleague John Jumper and US biochemist David Baker, was won for unlocking an impossible problem in biology that had remained unsolved for 50 years: predicting the structure of every protein known to humanity, using AI software known as AlphaFold. Having cracked that long-standing challenge, with widespread implications in science and medicine, Hassabis has his sights set on climate change and healthcare. "I want us to help solve some diseases," he told the Financial Times. His team is working on six drug development programmes with drugmakers Eli Lilly and Novartis, focusing on disease areas such as cancer and Alzheimer's. Hassabis said he expects to have a drug candidate in clinical trials within two years. His other big areas of focus are using AI to model the climate more accurately, and crossing the ultimate frontier in AI research: inventing machine intelligence on a par with human intelligence. "When we look back in 10 years, I hope [AI] will have heralded a new golden era of scientific discovery in all these different domains," said Hassabis, who was formerly a neuroscientist and video game designer. "That's what got me into AI in the first place. I see it as the ultimate tool in accelerating scientific research."
The DeepMind duo was recognised on Wednesday, a day after former Google colleague and veteran AI scientist Geoffrey Hinton won the physics prize alongside physicist John Hopfield for their work on neural networks, the foundational technology for modern AI systems that underpin healthcare, social media, self-driving cars -- and AlphaFold itself. The recognition of AI breakthroughs highlights a new era in research, emphasising the importance of computing tools and data science in cracking complex scientific problems at far shorter timescales, in everything from physics to mathematics, chemistry and biology. "It's obviously interesting that the [Nobel] committee has decided to make a statement like this by having the two together," Hassabis said. The awards also encapsulate AI's promises and potential pitfalls. Hopfield and Hinton were pioneers in the discipline in the early 1980s. Hinton, who is 76 and left Google last year, said he didn't plan to do further research. He instead intends to advocate for work on the safety of AI systems, and for governments to facilitate it. By contrast, the DeepMind pair won for work unveiled mainly in the past five years, and remain extremely optimistic about its societal impact. "The impact of [AI] in particular on science but also on the modern world more broadly is now very, very clear," said Maneesh Sahani, director of the Gatsby unit at University College London, a research institute focused on machine learning and theoretical neuroscience. Hinton was the Gatsby's founding director in 1998, while Hassabis worked as a postdoctoral researcher there in 2009, eventually spinning out DeepMind from the UCL institute in 2010. "Machine learning is showing up all over the place, from people analysing ancient text in forgotten languages, to radiographs and other medical imaging. 
There is a toolkit that we now have that will push science and academic disciplines forward in all sorts of different directions," said Sahani, who is also a neuroscience professor. AlphaFold's recent iterations have "ramifications across all of medicine, biology and many other areas" because proteins are so fundamental to living organisms, said Charlotte Deane, a professor of structural bioinformatics at Oxford university. "Many were sceptical when they started, but very quickly their program outperformed all other programs to predict protein structures," said Venki Ramakrishnan, a biologist who won the Nobel Prize in chemistry in 2009 for his work related to protein synthesis. "It really dramatically changed the field." AlphaFold has been used by more than 2mn scientists to, among other things, analyse the malarial parasite to develop a vaccine, improve plant resistance to climate change, and study the structure of the nuclear pore -- one of the largest protein complexes in the human body. Rosalyn Moran, a neuroscience professor at King's College London and chief executive of AI start-up Stanhope AI, said: "Tool building is blue collar scientific work . . . they are often the unsung heroes of science. For me that was the most exciting part of the award." AlphaFold still has shortcomings, as its creators reported earlier this year, including "hallucinations" of "spurious structural order" in cell regions that are in fact disordered. Another challenge facing the use of AI for scientific research is that some important fields of investigation may offer far less experimental data than protein analysis. In the physics Nobel, Hinton and Hopfield's work used fundamental concepts from physics and neuroscience to develop AI tools that can process patterns in large information networks. The Boltzmann machine, which Hinton invented, was able to learn from specific examples rather than instructions.
The machine was then able to recognise new examples of categories it had been trained on, such as images of cats. This type of learning software, known as a neural network, now forms the basis of most AI applications, from facial recognition software to the large language models that underpin ChatGPT and Google's Gemini. One of Hinton's former students, Ilya Sutskever, was co-founder and chief scientist of ChatGPT-maker OpenAI. "I would say I am someone who doesn't really know what field he's in but would like to understand how the brain works," said Hinton, a computer scientist and cognitive psychologist, during a press conference this week. "And in my attempts to understand how the brain works, I've helped to create a technology that works surprisingly well." The AI prizes have also brought to the fore the interconnected nature of scientific discoveries, and the need for sharing of data and expertise -- an increasingly rare phenomenon as AI research moves inside commercial outfits such as OpenAI and Google. Neuroscience and physics principles were used to develop the AI models of today, while the data generated by biologists helped train the AlphaFold software. "Scientists like me have traditionally solved protein shapes using laborious experimental methods which can take years," said Rivka Isaacson, professor of molecular biophysics at King's College London, who was an early beta tester of AlphaFold. "It was however these solved structures, which the experimental world deposits for public use, that were used to train AlphaFold." She added that the AI technique had allowed scientists like her to "skip ahead to probe deeper into protein function and dynamics, asking different questions and potentially opening up whole new areas of research". Ultimately, AI -- like electron microscopy or X-ray crystallography -- remains an analytical tool, not an independent agent conducting original research.
Hassabis insists the technology cannot replace the work of scientists. "The human ingenuity comes in -- asking the question, the conjecture, the hypothesis, our systems can't do any of that," he said. "[AI] just analyses data right now."
[15]
AI is having its Nobel moment. Do scientists need the tech industry to sustain it?
Hours after the artificial intelligence pioneer Geoffrey Hinton won a Nobel Prize in physics, he drove a rented car to Google's California headquarters to celebrate. Hinton doesn't work at Google anymore. Nor did the longtime professor at the University of Toronto do his pioneering research at the tech giant. But his impromptu party reflected AI's moment as a commercial blockbuster that has also reached the pinnacles of scientific recognition. That was Tuesday. Then, early Wednesday, two employees of Google's AI division won a Nobel Prize in chemistry for using AI to predict and design novel proteins. "This is really a testament to the power of computer science and artificial intelligence," said Jeanette Wing, a professor of computer science at Columbia University. Asked about the historic back-to-back science awards for AI work in an email Wednesday, Hinton said only: "Neural networks are the future." It didn't always seem that way for researchers who decades ago experimented with interconnected computer nodes inspired by neurons in the human brain. Hinton shares this year's physics Nobel with another scientist, John Hopfield, for helping develop those building blocks of machine learning. Neural network advances came from "basic, curiosity-driven research," Hinton said at a press conference after his win. "Not out of throwing money at applied problems, but actually letting scientists follow their curiosity to try and understand things." Such work started well before Google existed. But a bountiful tech industry has now made it easier for AI scientists to pursue their ideas even as it has challenged them with new ethical questions about the societal impacts of their work. One reason why the current wave of AI research is so closely tied to the tech industry is that only a handful of corporations have the resources to build the most powerful AI systems. 
"These discoveries and this capability could not happen without humongous computational power and humongous amounts of digital data," Wing said. "There are very few companies -- tech companies -- that have that kind of computational power. Google is one. Microsoft is another." The chemistry Nobel Prize awarded Wednesday went to Demis Hassabis and John Jumper of Google's London-based DeepMind laboratory along with researcher David Baker at the University of Washington for work that could help discover new medicines. Hassabis, the CEO and co-founder of DeepMind, which Google acquired in 2014, told the AP in an interview Wednesday his dream was to model his research laboratory on the "incredible storied history" of Bell Labs. Started in 1925, the New Jersey-based industrial lab was the workplace of multiple Nobel-winning scientists over several decades who helped develop modern computing and telecommunications. "I wanted to recreate a modern day industrial research lab that really did cutting-edge research," Hassabis said. "But of course, that needs a lot of patience and a lot of support. We've had that from Google and it's been amazing." Hinton joined Google late in his career and quit last year so he could talk more freely about his concerns about AI's dangers, particularly what happens if humans lose control of machines that become smarter than us. But he stops short of criticizing his former employer. Hinton, 76, said he was staying in a cheap hotel in Palo Alto, California when the Nobel committee woke him up with a phone call early Tuesday morning, leading him to cancel a medical appointment scheduled for later that day. By the time the sleep-deprived scientist reached the Google campus in nearby Mountain View, he "seemed pretty lively and not very tired at all" as colleagues popped bottles of champagne, said computer scientist Richard Zemel, a former doctoral student of Hinton's who joined him at the Google party Tuesday. 
"Obviously there are these big companies now that are trying to cash in on all the commercial success and that is exciting," said Zemel, now a Columbia professor. But Zemel said what's more important to Hinton and his closest colleagues has been what the Nobel recognition means to the fundamental research they spent decades trying to advance. Guests included Google executives and another former Hinton student, Ilya Sutskever, a co-founder and former chief scientist and board member at ChatGPT maker OpenAI. Sutskever helped lead a group of board members who briefly ousted OpenAI CEO Sam Altman last year in turmoil that has symbolized the industry's conflicts. An hour before the party, Hinton used his Nobel bully pulpit to throw shade at OpenAI during opening remarks at a virtual press conference organized by the University of Toronto in which he thanked former mentors and students. "I'm particularly proud of the fact that one of my students fired Sam Altman," Hinton said. Asked to elaborate, Hinton said OpenAI started with a primary objective to develop better-than-human artificial general intelligence "and ensure that it was safe." "And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that's unfortunate," Hinton said. In response, OpenAI said in a statement that it is "proud of delivering the most capable and safest AI systems" and that they "safely serve hundreds of millions of people each week." Conflicts are likely to persist in a field where building even a relatively modest AI system requires resources "well beyond those of your typical research university," said Michael Kearns, a professor of computer science at the University of Pennsylvania. But Kearns, who sits on the committee that picks the winners of computer science's top prize -- the Turing Award -- said this week marks a "great victory for interdisciplinary research" that was decades in the making. 
Hinton is only the second person to win both a Nobel and a Turing Award. The first, Turing-winning political scientist Herbert Simon, started working on what he called "computer simulation of human cognition" in the 1950s and won the Nobel economics prize in 1978 for his study of organizational decision-making. Wing, who met Simon in her early career, said scientists are only beginning to find ways to apply computing's most powerful capabilities to other fields. "We're just at the beginning in terms of scientific discovery using AI," she said. AP Business Writer Kelvin Chan contributed to this report.
[18]
Google's Nobel Prize Winners Stir Debate Over AI Research
LONDON (Reuters) - The award this week of Nobel prizes in chemistry and physics to a small number of artificial intelligence pioneers affiliated with Google has stirred debate over the company's research dominance and how breakthroughs in computer science ought to be recognised. Google has been at the forefront of AI research, but has been forced on the defensive as it tackles competitive pressure from Microsoft-backed OpenAI and mounting regulatory scrutiny from the U.S. Department of Justice. On Wednesday, Demis Hassabis - co-founder of Google's AI unit DeepMind - and colleague John Jumper were awarded the Nobel prize in chemistry, alongside U.S. biochemist David Baker, for their work decoding the structures of microscopic proteins. Former Google researcher Geoffrey Hinton, meanwhile, won the Nobel prize for physics on Tuesday, alongside U.S. scientist John Hopfield, for earlier discoveries in machine learning that paved the way for the AI boom. Professor Dame Wendy Hall, a computer scientist and advisor on AI to the United Nations, told Reuters that, while the recipients' work deserved recognition, the lack of a Nobel prize for mathematics or computer science had distorted the outcome. "The Nobel prize committee doesn't want to miss out on this AI stuff, so it's very creative of them to push Geoffrey through the physics route," she said. "I would argue both are dubious, but nonetheless worthy of a Nobel prize in terms of the science they've done. So how else are you going to reward them?" Noah Giansiracusa, an associate maths professor at Bentley University and author of "How Algorithms Create and Prevent Fake News", also argued that Hinton's win was questionable. "What he did was phenomenal, but was it physics? I don't think so. Even if there's inspiration from physics, they're not developing a new theory in physics or solving a longstanding problem in physics."
The Nobel prize categories for achievements in medicine or physiology, physics, chemistry, literature and peace were laid down in the will of Swedish inventor Alfred Nobel, who died in 1895. The prize for economics is a later addition established with an endowment from the Swedish central bank in 1968.

DOMINANCE

Regulators in the U.S. are currently circling Google for a potential break-up, which could force it to divest parts of its business, such as its Chrome browser and Android operating system, which some argue allow it to maintain an illegal monopoly in online search. The profits derived from its leading position have allowed Google and other Big Tech companies to outpace traditional academia in publishing groundbreaking AI research. Hinton himself has expressed some regrets about his life's work, quitting Google last year so that he could speak freely about the dangers of AI, and warning that computers could become smarter than people far sooner than previously expected. Speaking at a press conference Tuesday, he said: "I wish I had a sort of simple recipe that if you do this, everything's going to be okay, but I don't, in particular with respect to the existential threat of these things getting out of control and taking over." When he quit Google in 2023 over his AI concerns, Hinton said the company itself acted very responsibly. For some, this week's Nobel prize wins underscore how hard it is becoming for traditional academia to compete. Giansiracusa told Reuters there was a need for greater public investment in research. "So much of Big Tech is not oriented towards the next deep-learning breakthrough, but making money by pushing chatbots or putting ads all over the internet," he said. "There are pockets of innovation, but much of it is very unscientific." (Reporting by Martin Coulter; Editing by Matt Scuffham and Alex Richardson)
[20]
AI steps into science limelight with Nobel wins
For long periods of its history, artificial intelligence has lurked in the hinterland of science, often unloved and unfunded -- but two Nobel prizes in one week suggest its time in the sunshine has finally arrived. First on Tuesday, Geoffrey Hinton and John Hopfield won the physics prize for their pioneering work in creating the foundations of modern AI. Then on Wednesday, David Baker, John Jumper and Demis Hassabis shared the chemistry prize for work revealing the secrets of proteins through AI.
[21]
Scientists sound AI alarm after winning physics Nobel
"In the same circumstances, I would do the same again, but I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control," Hinton, 76, told reporters after the announcement. Hinton, known as "the Godfather of AI", raised eyebrows in 2023 when he quit his job at Google to warn of the "profound risks to society and humanity" of the technology.British-Canadian Geoffrey Hinton and American John Hopfield won the Nobel physics prize on Tuesday for their pioneering work on the foundations of artificial intelligence, with both sounding the alarm over the technology they helped bring to life. The pair's research on neural networks in the 1980s paved the way for today's deep-learning systems that promise to revolutionise society but have also raised apocalyptic fears. "In the same circumstances, I would do the same again, but I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control," Hinton, 76, told reporters after the announcement. Hinton, known as "the Godfather of AI", raised eyebrows in 2023 when he quit his job at Google to warn of the "profound risks to society and humanity" of the technology. In March last year, when asked whether AI could wipe out humanity, Hinton replied: "It's not inconceivable." The pair were honoured "for foundational discoveries and inventions that enable machine learning with artificial neural networks", the jury said. Ellen Moons, chair of the Nobel Committee for Physics, told a press conference that these tools have become part of our daily lives, including in facial recognition and language translation. While lauding the potential of AI, Moons noted that "its rapid development has also raised concerns about our future collectively". "Humans carry the responsibility for using this new technology in a safe and ethical way," she said. 
Hopfield, a professor emeritus at Princeton, was spotlighted for having created the "Hopfield network", also known as associative memory, which can be used to "store and reconstruct images and other types of patterns in data". The physicist joined Hinton in calling for a deeper understanding of modern AI systems to prevent them spiralling out of control, calling recent advances in the technology "very unnerving". "You don't know that the collective properties you began with are actually the collective properties with all the interactions present, and you don't therefore know whether some spontaneous but unwanted thing is lying hidden in the works," the physicist told a gathering at his university via video link. 'Exceed people's intellectual ability' The jury said Hinton, a 76-year-old professor at the University of Toronto, used the Hopfield network as a foundation for a new network: "the Boltzmann machine". Hinton was credited with inventing "a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures". "I'm flabbergasted, I had no idea this would happen," Hinton told reporters in a phone interview as the laureates were announced in Stockholm. Hinton said he was an avid user of AI tools such as ChatGPT, and said he believed the technology will have "a huge influence". "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability," Hinton said. Canadian Prime Minister Justin Trudeau hailed his countryman's accomplishment, saying "Canada is at the forefront of AI technology thanks to trailblazers like Dr. Hinton". "His curiosity for discovery and contributions to innovation will inspire generations to come. On behalf of all Canadians, I congratulate him on his remarkable achievement." 
Nobel season The Nobel Prize in Physics is the second Nobel of the season after the Medicine Prize on Monday was awarded to American scientists Victor Ambros and Gary Ruvkun. The US duo were honoured for their discovery of microRNA and its role in how genes are regulated. Awarded since 1901, the Nobel Prizes honour those who have, in the words of prize creator and scientist Alfred Nobel, "conferred the greatest benefit on humankind". The physics prize will be followed by the chemistry prize on Wednesday, with the highly watched literature and peace prizes to be announced on Thursday and Friday respectively. The economics prize wraps up the 2024 Nobel season on October 14. The winners will receive their prize, consisting of a diploma, a gold medal and a $1 million cheque, from King Carl XVI Gustaf in Stockholm on December 10, the anniversary of the 1896 death of scientist Alfred Nobel, who created the prizes in his will.
[24]
Google's Nobel prize winners stir debate over AI research
LONDON, Oct 10 (Reuters) - The award this week of Nobel prizes in chemistry and physics to a small number of artificial intelligence pioneers affiliated with Google (GOOGL.O) has stirred debate over the company's research dominance and how breakthroughs in computer science ought to be recognised. Google has been at the forefront of AI research, but has been forced on the defensive as it tackles competitive pressure from Microsoft-backed (MSFT.O) OpenAI and mounting regulatory scrutiny from the U.S. Department of Justice. On Wednesday, Demis Hassabis - co-founder of Google's AI unit DeepMind - and colleague John Jumper were awarded the Nobel prize in chemistry, alongside U.S. biochemist David Baker, for their work decoding the structures of microscopic proteins. Former Google researcher Geoffrey Hinton, meanwhile, won the Nobel prize for physics on Tuesday, alongside U.S. scientist John Hopfield, for earlier discoveries in machine learning that paved the way for the AI boom. Professor Dame Wendy Hall, a computer scientist and advisor on AI to the United Nations, told Reuters that, while the recipients' work deserved recognition, the lack of a Nobel prize for mathematics or computer science had distorted the outcome. "The Nobel prize committee doesn't want to miss out on this AI stuff, so it's very creative of them to push Geoffrey through the physics route," she said. "I would argue both are dubious, but nonetheless worthy of a Nobel prize in terms of the science they've done. So how else are you going to reward them?" Noah Giansiracusa, an associate maths professor at Bentley University and author of "How Algorithms Create and Prevent Fake News", also argued that Hinton's win was questionable. "What he did was phenomenal, but was it physics? I don't think so.
Even if there's inspiration from physics, they're not developing a new theory in physics or solving a longstanding problem in physics." The Nobel prize categories for achievements in medicine or physiology, physics, chemistry, literature and peace were laid down in the 1895 will of Swedish inventor Alfred Nobel, who died in 1896. The prize for economics is a later addition established with an endowment from the Swedish central bank in 1968. DOMINANCE Regulators in the U.S. are currently circling Google for a potential break-up, which could force it to divest parts of its business, such as its Chrome browser and Android operating system, which some argue allow it to maintain an illegal monopoly in online search. The profits derived from its leading position have allowed Google and other Big Tech companies to outpace traditional academia in publishing groundbreaking AI research. Hinton himself has expressed some regrets about his life's work, quitting Google last year so that he could speak freely about the dangers of AI, and warning that computers could become smarter than people far sooner than previously expected. Speaking at a press conference Tuesday, he said: "I wish I had a sort of simple recipe that if you do this, everything's going to be okay, but I don't, in particular with respect to the existential threat of these things getting out of control and taking over." When he quit Google in 2023 over his AI concerns, Hinton said the company itself acted very responsibly. For some, this week's Nobel prize wins underscore how hard it is becoming for traditional academia to compete. Giansiracusa told Reuters there was a need for greater public investment in research. "So much of Big Tech is not oriented towards the next deep-learning breakthrough, but making money by pushing chatbots or putting ads all over the internet," he said. "There are pockets of innovation, but much of it is very unscientific."
Reporting by Martin Coulter; Editing by Matt Scuffham and Alex Richardson
[25]
AI research gets two Nobel wins in one week
The big picture: The technical foundations of AI were laid over decades, but its advances have only received wide recognition more recently with the advent of chatbots and the popularization of generative AI. Driving the news: Geoffrey Hinton and John Hopfield were awarded the Nobel Prize in Physics on Tuesday for their work on AI in the late 1970s into the 1980s. The Nobel committee presented the prize in chemistry on Wednesday to Google DeepMind CEO Demis Hassabis, DeepMind director John Jumper and University of Washington professor David Baker for their work on proteins that are crucial to life. Between the lines: The Nobel prize is often awarded for research done decades ago, after its impact can be clearly assessed as having "the greatest benefit to humankind." Hassabis' "longstanding passion and motivation for doing AI" was to one day be able to "build learning systems that are able to help scientists accelerate scientific discovery," he told me last year. Yes, but: "It's far too premature to talk about AI being involved in all prizes," Hassabis said in a press conference on Wednesday. Zoom in: Three of the Nobel prize winners have ties to Google -- Hinton left the company last year, saying he wanted to speak freely about what he and others see as the dangers of AI. What to watch: AI critic Gary Marcus writes that Hinton (and others) have favored advancing AI through ever-expanding neural networks that learn from vast troves of data -- the approach that fuels generative AI.
[26]
An A.I. Pioneer Reflects on His Nobel Moment in an Interview
Cade Metz has been reporting on artificial intelligence for more than 15 years. On Tuesday, the scientists John J. Hopfield and Geoffrey E. Hinton received the Nobel Prize in Physics for discoveries that helped drive the development of artificial neural networks -- a technology that is now essential to the operation of search engines like Google and online chatbots like ChatGPT from OpenAI. The prize was given for a technology that Dr. Hopfield developed in the early 1980s called a Hopfield network and a related technique that Dr. Hinton helped create in the years that followed called a Boltzmann machine. The news surprised many physicists and artificial intelligence experts, including Dr. Hopfield and Dr. Hinton. In 2019, Dr. Hinton was part of a three-person group that received the Turing Award, often called "the Nobel Prize of computing," for its work on neural networks. Last year, he made headlines across the world when he left his job as a researcher at Google and warned that the A.I. technologies he helped create could one day destroy humanity. But he is not a physicist. He was once introduced at an academic conference as someone who had "failed at physics, dropped out of psychology and then joined a field with no standards at all: artificial intelligence." Dr. Hinton, a British native known for his dry, self-deprecating humor, enjoyed repeating this story. But he always added a caveat. "I didn't fail at physics and drop out of psychology," he would say. "I failed at psychology and dropped out of physics -- which is far more reputable." The New York Times reached Dr. Hinton by phone shortly after he learned that he had won the Nobel Prize in Physics. This interview has been edited and condensed for clarity. Many congratulations. Sorry, I can't talk. I am about to go on with the BBC. Bye. Hello, again. How was the BBC? We didn't connect. I am in a cheap hotel room with no internet. What was your reaction when you heard this morning's news?
I was shocked and amazed and flabbergasted. I never expected it. Neural networks are computer technologies. How does this relate to physics? Hopfield networks and a further development of them called Boltzmann machines were based on physics. Hopfield nets used an energy function, and the Boltzmann machine used ideas from statistical physics. So that stage in the development of neural networks did depend -- a lot -- on ideas from physics. But it was really a different technique -- called backpropagation -- that was used to build the A.I. models that are used today. That has less to do with physics. What is the relationship between the Boltzmann machine and backpropagation? Right now, there isn't much connection. They were two alternative theories for how we would get neural networks to run. In the early days, I managed to combine them by using Boltzmann machines to "pretrain" backpropagation networks. But people aren't doing that anymore. What do you mean by pretrain? How long do you have? Can you explain in language that the readers of The Times would understand? I am reminded of what the physicist Richard Feynman said when he received the Nobel Prize. A journalist asked him, "Professor Feynman, can you explain -- in just a couple of minutes -- what you won the Nobel Prize for?" Feynman apparently replied, "Listen, buddy, if I could explain it in a couple of minutes, it wouldn't be worth the Nobel Prize." Sorry. The BBC is calling again. Bye. Hello, again. It is safe to say the Boltzmann machine was a dead end for A.I. -- that the research went elsewhere? I think of that idea as something like an enzyme. An enzyme gets you over a barrier -- even if it isn't part of the final solution. Boltzmann machines were like an enzyme. It got us over the barrier of "How do you train deep neural networks?" It made it easier to train them. And once we had learned how to do that, we didn't need the Boltzmann machine anymore.
Did you work directly with John Hopfield on any of these ideas? No. I read his papers. But one of my main collaborators, Terry Sejnowski, worked with Hopfield and did his Ph.D. with Hopfield. Is it odd that you have received this award for physics? If there was a Nobel Prize for computer science, our work would clearly be more appropriate for that. But there isn't one. That is a great way of putting it. It is also a hint. Yes, perhaps we need a Nobel for computer science. In any case, you have won a Nobel for helping to create a technology that you now worry will cause serious danger for humanity. How do you feel about that? Having the Nobel Prize could mean that people will take me more seriously. Take you more seriously when you warn of future dangers? Yes.
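For readers curious what the prize-winning idea actually looks like, here is a minimal, illustrative sketch of the energy-based associative memory Hinton describes above: a tiny Hopfield network in Python with NumPy. This is a toy reconstruction of the textbook technique, not the laureates' code; the function names and the eight-unit pattern are invented for the example.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: the weight matrix is the average of outer
    products of the stored +/-1 patterns, with a zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def energy(W, s):
    """The network's energy function; each update can only lower it,
    so the dynamics settle into a stored pattern (a local minimum)."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=10):
    """Asynchronous updates: set each unit to the sign of its local
    field until the state stops changing."""
    s = s.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one 8-unit pattern, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, 1, -1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]               # flip one unit to corrupt the memory
restored = recall(W, noisy)
print(np.array_equal(restored, pattern))  # prints True
```

The corrupted state has higher energy than the stored pattern, and the update rule rolls it downhill back into the memory; this "store and reconstruct patterns in data" behaviour is what the Nobel citation refers to.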
[28]
Parmy Olson: Nobel prizes give Google the glow it craves
By Parmy Olson, Bloomberg Opinion (Tribune Content Agency) An executive from a large technology firm has just won a Nobel. On Wednesday, the top prize for chemistry went to the head of Alphabet Inc.'s AI efforts, Demis Hassabis, along with two other key scientists, for a years-long project that used artificial intelligence to predict the structure of proteins. The day before, Geoffrey Hinton, a former executive at Google who's been called a godfather of AI, won the Nobel prize for physics along with physicist John Hopfield, for work on machine learning. It seems the Nobel Foundation is eager to mark AI advancements - and the notion that key scientific problems can be solved computationally - as worthy of its coveted prizes. That will be a reputational boon for firms like Google and executives like Hassabis. But there's a risk too that such recognition obscures concerns about both the technology itself and the increasing concentration of AI power in a handful of companies. Hassabis himself has long craved this accolade, having told staff for years that he wanted DeepMind, the AI lab he co-founded and sold to Google in 2014, to win between three and five Nobel Prizes over the next decade. At a press conference on Wednesday, he called the award "an unbelievable honor of a lifetime" and said he'd been hoping to win it this time around. Indeed, he initially shaped DeepMind as a research lab with utopian objectives, where many of its leading scientists worked on building AI systems to help cure diseases like cancer or solve global warming. But that humanitarian agenda faded to the background after the sale to Google and especially after the release of OpenAI's ChatGPT, which sparked a race among tech giants to deploy chatbot-style technology to businesses and consumers.
DeepMind has since become more product-focused (information about its healthcare and climate efforts disappeared from its homepage, for example), though it has continued with health-related efforts like AlphaFold. Out of DeepMind's roughly 1,500-strong workforce, a team of just two dozen people were running the protein-folding project when it reached a critical milestone in 2020, according to a video documentary about the effort. The Nobel will surely give Hassabis a credibility boost at Alphabet, where he has been leading the company's fraught efforts to keep up with OpenAI. Google's flagship AI model Gemini has grappled with controversies over its frequent mistakes and the possibility it will choke off traffic to the rest of the web. Now perhaps a smoother path has been paved for Hassabis if he wants to become Alphabet's next chief executive. The former chess champion is a consummate strategist and rivals Sam Altman as the world's most successful builder of AI technology, having pushed the boundaries of fields like deep learning, reinforcement learning and games-based models such as AlphaGo, which beat world champion Go players eight years ago. Hassabis was already talking about taking on protein folding during those matches. The glow benefits Google, too. Recent challenges from antitrust regulators over monopolistic behavior haven't helped its reputation as a company founded on the principle of "don't be evil." Now with two Nobel prizes linked to work done by its scientists, the tech giant can more easily frame itself as providing services that are ultimately good for society, as its lawyers have been arguing, and perhaps generate goodwill more broadly with the public and regulators. But we shouldn't forget the tension between the high-minded goals professed by Big Tech and what their businesses are really focused on. Google, which derives close to 80% of its revenue from advertising, is now putting ads into its new AI search tool. 
For businesses, that invites a new layer of complexity to online advertising, while consumers face the prospect of wading through AI-generated information that Google is trying to monetize and which could one day become more biased toward advertisers. Remember also that Google's prioritization of human wellbeing was called into question less than three years ago when it fired two leading AI ethics experts who'd warned about the risks that its AI models could entrench bias, spread misinformation and hoard energy, issues that haven't gone away. A September study in Nature, for instance, showed that AI tools like ChatGPT were making racist decisions about people based on their dialect. The Nobel Prize is designed to recognize people who've made outstanding contributions to science, humanism and peace, so the Foundation behind it has taken a bold stance in validating the work of AI and of one company in particular. The award to Hassabis - like the peace prize given to Barack Obama one year after he was elected as president - feels a little premature. It's still unclear what kind of broad, real-world impact DeepMind's protein-folding project will have on the medical field and drug discovery. Let's hope the prize motivates well-endowed technology firms to invest much more in using AI for public service efforts like protein folding and in AI ethics research - and doesn't muddy the debate over the very real risks that AI poses to the world too. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of "We Are Anonymous."
[29]
Google triumphs on the Nobel stage as tough antitrust fight looms
Google, thanks to the tens of billions of dollars it makes every year from its online search business, has long pursued giant research projects that could one day change the world. On Wednesday, the Nobel Prize committee conferred considerable prestige to Google's pursuit of big ideas. Demis Hassabis, the CEO of Google's primary artificial intelligence lab, and John Jumper, one of the lab's scientists, were among a trio of researchers who received the Nobel Prize in chemistry for their efforts to better understand the human body and fight disease through AI. The two Google scientists won their Nobels a day after Geoffrey Hinton, a former Google vice president and researcher, was one of two winners of the Nobel Prize in physics for his pioneering work on artificial intelligence. The Nobel wins were a demonstration of the growing role artificial intelligence is playing in areas far beyond the traditional world of the high-tech industry, and were a reminder of Silicon Valley's influence in nearly every corner of science and the economy. "This is the year the Nobel committee got AI," said Oren Etzioni, a professor emeritus of computer science at the University of Washington. "These prizes are a conscious recognition of how influential AI has become in the scientific world." But the triumphant moment for Google was tempered by concerns that the commercial success that has allowed the company to pursue these long-term projects is under threat by antitrust regulators.
The Nobel awards were also a reminder of worries that the tech industry isn't paying enough attention to the implications of its open-throttled pursuit of building more powerful AI systems. "We might find ourselves in a situation in which not only the solutions but even the questions being asked are actually being provided by the AI," said Mohammed AlQuraishi, a Columbia University biologist. "It's going to be very interesting navigating that as scientists and as humans." On Tuesday evening, the Justice Department said it could ask a federal court to force Google into breaking off parts of the company or change how it operates in order to eliminate its monopoly in online search. Google is also facing off with the Justice Department in a Virginia federal court over claims that it broke antitrust laws to dominate the technology that places ads on websites. Closing arguments in that case are expected next month. And on Monday, a federal judge in California ordered Google to let other companies place app stores on its Android operating system for three years as part of a third antitrust case. Google is not the only big tech company getting squeezed by regulators. The Justice Department has also sued Apple, arguing that the company makes it tough for customers to ditch its suite of devices and software. The Federal Trade Commission has filed antitrust lawsuits against Meta, saying it snuffed out competition when it bought Instagram and WhatsApp; and Amazon, arguing the company's practices artificially inflate prices for products online. As the largest tech companies fight off concerns over monopolist behavior, they are going all-in on AI -- so much so that regulators are arguing that the companies must be reined in now before they use their power to take control of the young market for AI systems. "AI is coming to chemistry and going to Washington," said Erik Brynjolfsson, director of the Stanford Digital Economy Lab. 
"You may not be interested in AI but AI is interested in you." In its Tuesday court filing, the Justice Department said it believed that any efforts to tame Google's search monopoly should take into account its ability to "leverage its monopoly power to feed artificial intelligence features." The Justice Department said it was considering asking the U.S. District Court for the District of Columbia, which in August agreed with the government that Google abused a search monopoly, to take steps to limit Google's power in the new technology, including allowing websites to opt out of having their content used in the development of Google's artificial intelligence systems. The FTC and the Justice Department this year reached a separate deal clearing the way for them to investigate other companies focused on AI development. The Justice Department has opened an inquiry into Nvidia, which makes computer chips essential to the technology, while the FTC will be responsible for investigating Microsoft and its partner, the San Francisco company OpenAI. (BEGIN OPTIONAL TRIM.) (The New York Times sued OpenAI and Microsoft in December over copyright infringement of news content related to AI systems.) In the early 1960s, when computer science was emerging as a field, the standard put down was that any academic discipline that put "science" in its name wasn't one. A computer, skeptics said, was a mere tool like a test tube or a microscope. But as the technology has progressed, accelerated by recent advances in artificial intelligence, computer science has become a driving force behind discoveries across the sciences -- in astronomy, biology, chemistry, medicine and physics. "Chatbots are how most people know AI, but the technology's ability to speed scientific discovery is much more important," Brynjolfsson said. (END OPTIONAL TRIM.) 
After OpenAI released its ChatGPT chatbot in late 2022, igniting an industrywide AI boom, some researchers turned up the volume on their concerns about how the technology could be used. Hinton left Google, using his retirement as an opportunity to speak freely about his worry that the race toward AI could one day be catastrophic. He said Tuesday that he hoped "having the Nobel Prize could mean that people will take me more seriously." Leading researchers such as Hassabis often describe artificial intelligence as a way to cure disease, battle climate change and solve other scientific mysteries that have long bedeviled the world's researchers. The work that won a Nobel was a significant step in that direction. DeepMind, Google's main AI lab, created technology called AlphaFold that can rapidly and reliably predict the physical shape of proteins -- the microscopic mechanisms that drive the behavior of the human body and all living things. By pinpointing protein structures, scientists can more quickly develop medicines and vaccines and tackle other scientific problems. In 2012, Hinton, then a professor at the University of Toronto, published a research paper with two of his graduate students that demonstrated the power of an AI technology known as a neural network. Google paid $44 million to bring them to the company. About a year later, Google paid $650 million for Hassabis' four-year-old startup, DeepMind, which specialized in the same kind of technology. Hinton and Hassabis were part of a small academic community that had nurtured neural networks for years while the rest of the world had largely ignored it. Hinton, 76, liked to call Hassabis, 48, his "grand-post-doc" because he had overseen the postdoctoral work of the academic who later oversaw Hassabis' research. Hassabis also worries that AI could cause a range of problems or even threaten humanity if it is not carefully controlled. 
But he thinks that staying with a company is the best way to make sure its AI doesn't cause problems. A Google spokesperson, Jane Park, said in a statement Wednesday, "As a field, we have to proceed with cautious optimism and engage in a conversation with wider society about the risks in order to mitigate them, and unlock AI's incredible ability to accelerate scientific discovery." When Google acquired DeepMind, Hassabis and his co-founders asked for assurances that Google would not use DeepMind's technologies for military purposes and that it would establish an independent board that would work to ensure that its technologies were not misused. "Of course it's a dual-purpose technology," Hassabis said during a news conference after winning the Nobel Prize. "It has extraordinary potential for good, but also it can be used for harm."
[30]
Google Triumphs on the Nobel Stage as Tough Antitrust Fight Looms
Google, thanks to the tens of billions of dollars it makes every year from its online search business, has long pursued giant research projects that could one day change the world. On Wednesday, the Nobel Prize committee conferred considerable prestige on Google's pursuit of big ideas. Demis Hassabis, the chief executive of Google's primary artificial intelligence lab, and John Jumper, one of the lab's scientists, were among a trio of researchers who received the Nobel Prize in Chemistry for their efforts to better understand the human body and fight disease through A.I. The two Google scientists won their Nobels a day after Geoffrey Hinton, a former Google vice president and researcher, was one of two winners of the Nobel Prize in Physics for his pioneering work on artificial intelligence. The Nobel wins were a demonstration of the growing role artificial intelligence is playing in areas far beyond the traditional world of the high-tech industry, and were a reminder of Silicon Valley's influence in nearly every corner of science and the economy. "This is the year the Nobel committee got A.I.," said Oren Etzioni, a professor emeritus of computer science at the University of Washington. "These prizes are a conscious recognition of how influential A.I. has become in the scientific world." But the triumphant moment for Google was tempered by concerns that the commercial success that has allowed the company to pursue these long-term projects is under threat from antitrust regulators. The Nobel awards were also a reminder of worries that the tech industry isn't paying enough attention to the implications of its open-throttled pursuit of building more powerful A.I. systems. "We might find ourselves in a situation in which not only the solutions but even the questions being asked are actually being provided by the A.I.," said Mohammed AlQuraishi, a Columbia University biologist. "It's going to be very interesting navigating that as scientists and as humans." 
On Tuesday evening, the Justice Department said it could ask a federal court to force Google to break off parts of the company or change how it operates in order to eliminate its monopoly in online search. Google is also facing off with the Justice Department in a Virginia federal court over claims that it broke antitrust laws to dominate the technology that places ads on websites. Closing arguments in that case are expected next month. And on Monday, a federal judge in California ordered Google to let other companies place app stores on its Android operating system for three years as part of a third antitrust case. Google is not the only big tech company getting squeezed by regulators. The Justice Department has also sued Apple, arguing that the company makes it tough for customers to ditch its suite of devices and software. The Federal Trade Commission has filed antitrust lawsuits against Meta, saying it snuffed out competition when it bought Instagram and WhatsApp; and Amazon, arguing the company's practices artificially inflate prices for products online. As the largest tech companies fight off concerns over monopolistic behavior, they are going all-in on A.I. -- so much so that regulators are arguing that the companies must be reined in now before they use their power to take control of the young market for A.I. systems. "A.I. is coming to chemistry and going to Washington," said Erik Brynjolfsson, director of the Stanford Digital Economy Lab. "You may not be interested in A.I. but A.I. is interested in you." In its Tuesday court filing, the Justice Department said it believed that any efforts to tame Google's search monopoly should take into account its ability to "leverage its monopoly power to feed artificial intelligence features." The Justice Department said it was considering asking the U.S. 
District Court for the District of Columbia, which in August agreed with the government that Google abused a search monopoly, to take steps to limit Google's power in the new technology, including allowing websites to opt out of having their content used in the development of Google's artificial intelligence systems. The Federal Trade Commission and the Justice Department this year reached a separate deal clearing the way for them to investigate other companies focused on A.I. development. The Justice Department has opened an inquiry into Nvidia, which makes computer chips essential to the technology, while the F.T.C. will be responsible for investigating Microsoft and its partner, the San Francisco company OpenAI. (The New York Times sued OpenAI and Microsoft in December over copyright infringement of news content related to A.I. systems.) In the early 1960s, when computer science was emerging as a field, the standard put-down was that any academic discipline that put "science" in its name wasn't one. A computer, skeptics said, was a mere tool like a test tube or a microscope. But as the technology has progressed, accelerated by recent advances in artificial intelligence, computer science has become a driving force behind discoveries across the sciences -- in astronomy, biology, chemistry, medicine and physics. "Chatbots are how most people know A.I., but the technology's ability to speed scientific discovery is much more important," Mr. Brynjolfsson said. After OpenAI released its ChatGPT chatbot in late 2022, igniting an industrywide A.I. boom, some researchers turned up the volume on their concerns about how the technology could be used. Dr. Hinton left Google, using his retirement as an opportunity to speak freely about his worry that the race toward A.I. could one day be catastrophic. He said on Tuesday that he hoped "having the Nobel Prize could mean that people will take me more seriously." Leading researchers such as Dr. 
Hassabis often describe artificial intelligence as a way to cure disease, battle climate change and solve other scientific mysteries that have long bedeviled the world's researchers. The work that won a Nobel was a significant step in that direction. DeepMind, Google's main A.I. lab, created technology called AlphaFold that can rapidly and reliably predict the physical shape of proteins -- the microscopic mechanisms that drive the behavior of the human body and all living things. By pinpointing protein structures, scientists can more quickly develop medicines and vaccines and tackle other scientific problems. In 2012, Dr. Hinton, then a professor at the University of Toronto, published a research paper with two of his graduate students that demonstrated the power of an A.I. technology called a neural network. Google paid $44 million to bring them to the company. About a year later, Google paid $650 million for Dr. Hassabis's four-year-old start-up, DeepMind, which specialized in the same kind of technology. Dr. Hinton and Dr. Hassabis were part of a small academic community that had nurtured neural networks for years while the rest of the world had largely ignored it. Dr. Hinton, 76, liked to call Dr. Hassabis, 48, his "grand-post-doc" because he had overseen the postdoctoral work of the academic who later oversaw Dr. Hassabis's research. Dr. Hassabis also worries that A.I. could cause a range of problems or even threaten humanity if it is not carefully controlled. But he thinks that staying with a company is the best way to make sure its A.I. doesn't cause problems. When Google acquired DeepMind, Dr. Hassabis and his co-founders asked for assurances that Google would not use DeepMind's technologies for military purposes and that it would establish an independent board that would work to ensure that its technologies were not misused. "Of course it's a dual-purpose technology," Dr. Hassabis said during a news conference after winning the Nobel Prize. 
"It has extraordinary potential for good, but also it can be used for harm." Teddy Rosenbluth contributed reporting in New York.
[31]
Nobel Prize Goes to 'Godfathers of AI' Who Now Fear Their Work Is Growing Too Powerful
John Hopfield and Geoffrey Hinton developed artificial neural networks that laid the foundation for modern recommendation systems and generative AI. Two AI researchers, John Hopfield and Geoffrey Hinton, received the Nobel Prize in physics on Tuesday for their work building artificial neural networks that can memorize information and recognize patterns in ways that mimic the human brain. Their research in the 1980s laid the foundation for the last decade's explosive progress in artificial intelligence and today's ubiquitous recommendation algorithms and generative AI systems. Both men have since said that progress needs to be constrained for the sake of humanity. In its announcement of the prize, the Nobel committee emphasized how far the field has come since Hopfield published his seminal paper in 1982, in which he described a neural network with fewer than 500 possible parameters. Today, tech companies are churning out generative AI systems with billions and trillions of parameters. The Hopfield network was a collection of 30 interconnected digital nodes that could change their values between 1 and 0 and in doing so be programmed to record patterns that represented the pixels in black and white images. It drew on equations from physics used to describe how atoms in a network affect each other's spin in order to calculate how the relationships between the nodes in the network represented the images. In effect, the network could be programmed to create memories of certain images. And when it was fed a new image that was fuzzy or incomplete, it could calculate its way back to the most similar image in its memory. Hinton built on Hopfield's work by designing neural networks that could not just remember and recreate patterns, but could be taught to recognize similar patterns in entirely different data -- for example, the patterns that make one picture of a dog like another but not like a picture of a cat. 
In 1985 he published a paper introducing this network, named a Boltzmann machine after physicist Ludwig Boltzmann, who developed statistical equations for calculating the collective properties of a network composed of many different components. In 2023, Hopfield was one of the most notable signatories on a letter calling for AI companies to pause the development of generative AI systems more powerful than OpenAI's GPT-4. And Hinton has recently been talking a lot about his concerns that AI is advancing too rapidly for humans to control. He's estimated that humans could build an artificial intelligence that exceeds our own intelligence in the next five to 20 years and that "It'll figure out ways of manipulating people to do what it wants." He was one of several big names in the field to sign an open letter this year calling for California Governor Gavin Newsom to enact a law that would have held large tech companies liable for building AI models that caused catastrophic losses of life or property damage. Newsom ultimately vetoed the bill, under pressure from tech companies, including Hinton's former employers at Google. "In the next few years we need to figure out if there's a way to deal with that threat," Hinton said in an interview with the Nobel Prize committee following the award announcement. "So I think it's very important right now for people to be working on the issue of how will we keep control. We need to put a lot of research effort into it. I think one thing governments can do is force the big companies to spend a lot more of their resources on safety research."
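The associative-memory behavior described above -- storing a pattern and calculating a way back to it from a corrupted copy -- can be sketched in a few lines of Python. This is an illustrative toy, not Hopfield's original 1982 formulation: it uses the common ±1 node convention rather than the 1/0 values mentioned in the article, and the Hebbian training rule is one standard textbook choice.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: strengthen weights between nodes that agree across stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # nodes have no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Repeatedly update every node; the state descends toward the nearest stored pattern."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Store one 6-"pixel" pattern, then feed in a corrupted copy.
stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = np.array([1, -1, 1, -1, -1, -1])  # one flipped pixel
recovered = recall(W, noisy)
print(recovered)  # the network settles back on the stored pattern
```

With 30 nodes, as in the network the committee described, the same weight matrix can hold several black-and-white patterns at once, and a fuzzy input converges to whichever stored pattern it most resembles.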
[32]
Google's Nobel Winners Stir Debate Over AI Research
On Wednesday, Demis Hassabis -- co-founder of Google's AI unit DeepMind -- and colleague John Jumper were awarded the Nobel prize in chemistry, alongside U.S. biochemist David Baker, for their work decoding the structures of microscopic proteins. Former Google researcher Geoffrey Hinton, meanwhile, won the Nobel prize for physics on Tuesday, alongside U.S. scientist John Hopfield, for earlier discoveries in machine learning that paved the way for the AI boom.
[33]
'Godfather of AI' wins Nobel Prize for work he fears threatens humanity
'Flabbergasted' Geoffrey Hinton warned that AI might 'take control.' Geoffrey Hinton, one of the so-called 'Godfathers of AI,' has won a Nobel Prize for pioneering the very same technology he fears could result in the "end of people." Hinton and fellow AI researcher John J. Hopfield were awarded the Nobel Prize in Physics on Tuesday for their crucial early work on artificial neural networks, which have since formed the foundation for powerful AI models developed by Google, OpenAI, and others. The pair's work, which draws on inspiration from the human brain's architecture, paved the way for advancements in machine learning used in everything from fraud detection to driverless vehicles. In Hinton's view, the tech he helped pioneer may also pose a profound risk to human safety. Hopfield and Hinton are credited with advancing the study of neural networks in the 1970s and '80s during a time when it was still unclear that the field would mature into the behemoth it is today. Hopfield is known for combining findings from physics, biology, and neuroscience to create a network -- the "Hopfield network" -- capable of saving and recreating patterns from data. Hinton later built on this to create his own network called the Boltzmann machine, which can identify patterns in large masses of data. Together, these advancements were crucial first steps to eventually creating machines capable of classifying images. That, in turn, would be used in more modern AI models to rapidly learn from images and patterns stored on vast datasets. The award further highlights the growing parallels and interconnectedness between physics and computer science. 
Neural networks, which draw inspiration from how the human brain uses neurons to take in new information, have since gone on to form the underlying technological basis for large language models like ChatGPT, as well as image recognition models used in everything from cancer screening to facial recognition. Hinton and several of his colleagues previously received the Turing Award for their work on neural networks. In a statement describing its decision, the Nobel committee credited Hopfield and Hinton's work introducing new "way[s] for us to use computers" to solve challenging societal questions. "Thanks to their work humanity now has a new item in its toolbox, which we can choose to use for good purposes," the Nobel Committee wrote on X, the social media platform formerly called Twitter. "Machine learning based on artificial neural networks is currently revolutionising science, engineering and daily life." Hinton, who has previously referred to modern AI tech as an "existential threat" to humanity, appeared surprised by the Royal Swedish Academy of Sciences' decision when speaking with The Washington Post Tuesday morning. "I'm in a cheap hotel in California which doesn't have a good internet or phone connection. I was going to get an MRI scan today, but I think I'll have to cancel that," Hinton said. He went on to voice caution around future AI development, though his tone was markedly less pessimistic than some of his previous statements. "It's going to be wonderful in many respects," Hinton told the Post. "It'll mean huge improvements in productivity. But we also have to worry about a number of possible bad consequences ... I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control." "We have no experience with what it's like to have things smarter than us," he added. Google approached Hinton in 2012 after he and several colleagues achieved a breakthrough in neural network programming. 
Hinton joined the company as a researcher and vice president and worked to advance their AI efforts. The technology progressed much faster than Hinton was expecting. In previous reports, Hinton said he thought the types of text outputs generated by ChatGPT and other large language models in recent years would not have been possible in his lifetime. The researcher became increasingly uncomfortable with the speed of AI's development, leading him to resign from Google last year so he could criticize the industry more freely. After leaving Google, Hinton joined a chorus of alarmed researchers and technologists who believe the tech industry's rapid rush to create more powerful AI models could have harmful societal side effects. Although he's since tempered his language in recent statements, Hinton previously expressed concerns that an unchecked AI model could, somehow, "take over" humanity. Those fears caused the researcher, known by many in the industry as a "Godfather" of AI, to reportedly regret much of his life's work. "The idea that this stuff could actually get smarter than people -- a few people believed that," Hinton said during a 2023 interview with The New York Times. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." Hinton's new Nobel Prize may further complicate the researcher's already strained relationship with his legacy.
[34]
Nobel Prize in Physics winners say they worry about AI
"I'm hoping AI will lead to tremendous benefits, to tremendous increases in productivity, and to a better life for everybody. I'm convinced that it will do that in healthcare. My worry is that it may also lead to bad things and, in particular, when we get things more intelligent than ourselves. No one really knows whether we're going to be able to control them." "Quite a few good researchers believe that sometime in the next 20 years, AI will become more intelligent than us. And we need to think hard about what happens then." "But I worry about anything which says I'm big, I'm fast, I'm faster than you are, I'm bigger than you are. And I can, I can also run you. Now can you peacefully inhabit with me? I don't know... I worry." Heralded for its revolutionary potential in areas ranging from cutting-edge scientific discovery to more efficient admin, the emerging technology on which the duo worked has also raised fears humankind may soon be outsmarted and outcompeted by its own creation. Hinton has been widely credited as a godfather of AI and made headlines when he quit his job at Google last year to be able to more easily speak about the dangers of the technology he had pioneered. Hopfield, 91, a professor emeritus at Princeton University, created an associative memory that can store and reconstruct images and other types of patterns in data, said the Royal Swedish Academy of Sciences, which awards the prize.
[35]
Nobel-winning physicist 'unnerved' by AI technology he helped create
WASHINGTON (AFP) - A US scientist who won the 2024 Nobel physics prize for his pioneering work on artificial intelligence said Tuesday he found recent advances in the technology "very unnerving" and warned of possible catastrophe if not kept in check. John Hopfield, a professor emeritus at Princeton, joined co-winner Geoffrey Hinton in calling for a deeper understanding of the inner workings of deep-learning systems to prevent them from spiraling out of control. Addressing a gathering at the New Jersey university via video link from Britain, the 91-year-old said that over the course of his life he had watched the rise of two powerful but potentially hazardous technologies - biological engineering and nuclear physics. "One is accustomed to having technologies which are not singularly only good or only bad, but have capabilities in both directions," he said. "And as a physicist, I'm very unnerved by something which has no control, something which I don't understand well enough so that I can understand what are the limits which one could drive that technology." "That's the question AI is pushing," he continued, adding that despite modern AI systems appearing to be "absolute marvels," there is a lack of understanding about how they function, which he described as "very, very unnerving." "That's why I myself, and I think Geoffrey Hinton also, would strongly advocate understanding as an essential need of the field, which is going to develop some abilities that are beyond the abilities you can imagine at present." Hopfield was honored for devising the "Hopfield network" - a theoretical model demonstrating how an artificial neural network can mimic the way biological brains store and retrieve memories. His model was improved upon by British-Canadian Hinton, often dubbed the "Godfather of AI," whose "Boltzmann machine" introduced the element of randomness, paving the way for modern AI applications such as image generators. 
Hinton himself emerged last year as a poster child for AI doomsayers, a theme he returned to during a news conference held by the University of Toronto, where he serves as a professor emeritus. "If you look around, there are very few examples of more intelligent things being controlled by less intelligent things, which makes you wonder whether when AI gets smarter than us, it's going to take over control," the 76-year-old told reporters.
Civilizational downfall
With the meteoric rise of AI capabilities -- and the fierce race it has sparked among companies -- the technology has faced criticism for evolving faster than scientists can fully comprehend. "You don't know that the collective properties you began with are actually the collective properties with all the interactions present, and you don't therefore know whether some spontaneous but unwanted thing is lying hidden in the works," stressed Hopfield. He evoked the example of "ice-nine" -- a fictional, artificially engineered crystal in Kurt Vonnegut's 1963 novel "Cat's Cradle" developed to help soldiers deal with muddy conditions but which inadvertently freezes the world's oceans solid, causing the downfall of civilization. "I'm worried about anything that says... 'I'm faster than you are, I'm bigger than you are... can you peacefully inhabit with me?' I don't know, I worry." Hinton said it was impossible to know how to escape catastrophic scenarios at present, "that's why we urgently need more research." "I'm advocating that our best young researchers, or many of them, should work on AI safety, and governments should force the large companies to provide the computational facilities that they need to do that," he added.
[36]
Nobel laureate Geoffrey Hinton is both AI pioneer and frontman of alarm
Geoffrey Hinton is a walking paradox -- an archetype of a certain kind of brilliant scientist. Hinton's renown was solidified on Tuesday when he won the Nobel prize for physics alongside the American scientist John Hopfield, for foundational work on neural networks, the computational pathways that led to the modern-day breakthroughs in AI. However, in recent years he has come to be defined by the contradiction that the discovery that led to his acclaim is now a source of ceaseless concern. Over the last year, Hinton, dubbed the "godfather of AI," has repeatedly and emphatically warned about the dangers the technology unleashed by his discovery could cause. In his role as both Prometheus and Cassandra, Hinton, like many scientists of legend, was caught between the human desire to achieve and the humanist impulse to reflect on the consequences of one's actions. J. Robert Oppenheimer and Albert Einstein grappled torturously with the destruction their atomic research caused. Alfred Nobel, the inventor of dynamite, became so distraught over what his legacy might be that he started a foundation to award the eponymous prize that Hinton won. "I can't see a path that guarantees safety," Hinton told 60 Minutes in 2023. "We're entering a period of great uncertainty, where we're dealing with things we've never dealt with before." Much of Hinton's worry stems from the belief that humanity knows frighteningly little about artificial intelligence -- and that machines may outsmart humans. "These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening," he said in an interview with NPR. Originally from England, Hinton spent much of his professional life in the U.S. and Canada. It was at the University of Toronto where he reached a major breakthrough that would become the intellectual foundation for many contemporary uses of AI. 
In 2012, Hinton and two grad students (one of whom was Ilya Sutskever, the former chief scientist at OpenAI) built a neural network that could identify basic objects in pictures. Google eventually bought a company Hinton had started based on the tech for $44 million. Hinton then worked at Google for 10 years before retiring in 2023 to free himself of any corporate constraints that might have limited his ability to warn the public about AI. (Hinton did not respond to a request for comment). Hinton feared the rate of progress in AI as much as anything else. "Look at how it was five years ago and how it is now," Hinton told the New York Times last year. "Take the difference and propagate it forwards. That's scary." Concerning him was the potential for AI models to teach each other new information that only one model may have learned, which could be done with considerably greater efficiency than humans, according to Hinton. "Whenever one [model] learns anything, all the others know it," Hinton said in 2023. "People can't do that. If I learn a whole lot of stuff about quantum mechanics and I want you to know all that stuff about quantum mechanics, it's a long, painful process of getting you to understand it." Among Hinton's more controversial views is that AI can, in fact, "understand" the things it is doing and saying. If true, this fact could shatter much of the conventional wisdom about AI. The consensus is that AI systems don't necessarily know why they're doing what they're doing, but rather are programmed to produce certain outputs based on prompts they are given. Hinton is careful to say in public statements that AI is not self-aware, as humans are. Rather, the learning mechanisms by which AI systems learn, improve, and ultimately produce certain outputs mean they must comprehend that which they're learning. The impetus for Hinton sounding the alarm was when he asked a chatbot to accurately explain why a joke he had made up was funny, according to Wired. 
That a chatbot could understand the subtleties of humor and then convey them clearly in its own words was revelatory in Hinton's view. As humanity races toward a finish line that virtually no one understands, Hinton fears that control of AI may slip through humanity's fingers. He envisions a scenario in which AI systems will write code to alter their own learning protocols and hide from humans. In a Shakespearean twist, they'll have learned how to do so precisely from our own flaws. "They will be able to manipulate people," Hinton told 60 Minutes in October 2023. "They will be very good at convincing people, because they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They'll know all that stuff."
[37]
'Godfather of AI' Geoffrey Hinton just won a Nobel even though he's now scared of AI
Geoffrey Hinton, the oft-recognized 'Godfather of AI' and now-vocal alarm ringer for an AI-infused future, just won a Nobel Prize in Physics for his work in - wait for it - training artificial neural networks using physics. That's right, the brilliant Turing Award-winning scientist most afraid of how artificial intelligence might harm humanity has won the world's biggest science award for his foundational work in AI. As The Royal Swedish Academy of Sciences (the group that awards the Nobel Prize) describes it, "Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures." Hinton shares his Nobel with John J. Hopfield of Princeton University. Hinton's work built upon Hopfield's breakthrough work where he created a network system that could save and recreate patterns. Combined, their work led to future breakthroughs in machine learning (systems that can learn from and improve on data without explicit programming) and the concept of artificial neural networks, which is often at the core of modern AI. Hinton, who is currently teaching computer science at the University of Toronto, has a storied AI history that started with those early breakthroughs and led him to Google, where he and his team helped lay the groundwork for today's chatbots like OpenAI's ChatGPT and Google Gemini. However, when Hinton left in 2023, he sounded the alarm, worrying that Google was no longer, as he told The New York Times, "a proper steward" for AI. The warnings ranged from companies going too fast and acting recklessly to AI being responsible for a flood of fake content, gutting the job market, and outthinking us. 
A year later, it seems like some of those fears are coming true; companies are increasingly employing AI to handle basic writing tasks, our feeds are now flooded with AI-generated content that sometimes includes AI watermarks, but not consistently, and we are racing toward the unknown of artificial general intelligence, which may mean computers that can think as well as or better than we do. I emailed Hinton for comment on his win and how it affects his thinking about the current state of AI and will update this article when I hear back. Still, it makes sense to honor Hinton for his pioneering work. AI as we know it would probably not exist without Hinton and Hopfield. Applying physics to the problem of pattern recognition was a novel solution that, in some ways, helped computers operate more like the human brain. The concept of neural networks, arguably AI's most powerful tool, would not exist without Hinton. Surely, Hinton's other accomplishment is waking us up to the notion that AI is a double-edged sword. It's a vastly powerful tool that is already changing our lives, and it's one that desperately needs guardrails to protect humanity from AI run amok. Hinton may not have understood what he unleashed when he first developed these concepts in the 1970s, but he's now an honored beacon of light and reason in a confusing and fast-moving world of AI.
[38]
Researcher Who Just Won the Nobel Prize Quit Google to Warn About Evil AI Coming for Us All
Both of the men who won this year's Nobel Prize in Physics are artificial intelligence pioneers -- and one of them is considered the technology's "godfather." As Reuters reports, American physicist John Hopfield and AI expert Geoffrey Hinton were awarded the coveted prize this week. Considered the "godfather of AI," Hinton laid the groundwork for today's neural networks with his research in 2012 -- but in 2023, he quit his job at Google to join a chorus of critics sounding alarm bells about the technology. In an interview with the New York Times last year about leaving his job as a vice president and engineering fellow at the tech giant, Hinton said he'd previously thought of Google as a "proper steward" of the powerful technology. That's until Microsoft partnered with OpenAI to unleash the latter's GPT-4 large language model (LLM), which powers ChatGPT, onto the masses. Though he didn't believe that AI was anywhere near its zenith at the time, the 76-year-old computer scientist suggested he saw the writing on the wall with the Microsoft-OpenAI deal. "Most people thought it was way off. And I thought it was way off," Hinton told the newspaper at the time. "I thought it was 30 to 50 years or even longer away." "I think it's very reasonable for people to be worrying about these issues now," he told CBS at the time, "even though it's not going to happen in the next year or two." Now a professor emeritus at the University of Toronto, Hinton has made it abundantly clear in the roughly 18 months since his Google departure that he thinks that AI may escape human control at any time -- and once it does, all hell may break loose. "Here we're dealing with something where we have much less idea of what's going to happen and what to do about it," the computer scientist said during a conversation with the Nobel committee. "I wish I had a sort of simple recipe that if you do this, everything's going to be okay. But I don't." 
Considered the leading AI "doomer" for his grim outlook on the technology he helped birth, Hinton said when speaking to the Nobel committee that he was very surprised to learn he'd won the award and had been unaware that he'd even been nominated. "Hopefully it'll make me more credible," he said of winning the Nobel, "when I say these things really do understand what they're saying."
[39]
Nobel-winning physicist 'unnerved' by AI technology he helped create
A US scientist who won the 2024 Nobel physics prize for his pioneering work on artificial intelligence said Tuesday he found recent advances in the technology "very unnerving" and warned of possible catastrophe if not kept in check. John Hopfield, a professor emeritus at Princeton, joined co-winner Geoffrey Hinton in calling for a deeper understanding of the inner workings of deep-learning systems to prevent them from spiraling out of control. Addressing a gathering at the New Jersey university via video link from Britain, the 91-year-old said that over the course of his life he had watched the rise of two powerful but potentially hazardous technologies -- biological engineering and nuclear physics. "One is accustomed to having technologies which are not singularly only good or only bad, but have capabilities in both directions," he said. "And as a physicist, I'm very unnerved by something which has no control, something which I don't understand well enough so that I can understand what are the limits which one could drive that technology." "That's the question AI is pushing," he continued, adding that despite modern AI systems appearing to be "absolute marvels," there is a lack of understanding about how they function, which he described as "very, very unnerving." "That's why I myself, and I think Geoffrey Hinton also, would strongly advocate understanding as an essential need of the field, which is going to develop some abilities that are beyond the abilities you can imagine at present." Hopfield was honored for devising the "Hopfield network" -- a theoretical model demonstrating how an artificial neural network can mimic the way biological brains store and retrieve memories. His model was improved upon by British-Canadian Hinton, often dubbed the "Godfather of AI," whose "Boltzmann machine" introduced the element of randomness, paving the way for modern AI applications such as image generators. 
Hinton himself emerged last year as a poster child for AI doomsayers, a theme he returned to during a news conference held by the University of Toronto, where he serves as a professor emeritus. "If you look around, there are very few examples of more intelligent things being controlled by less intelligent things, which makes you wonder whether when AI gets smarter than us, it's going to take over control," the 76-year-old told reporters. - Civilizational downfall - With the meteoric rise of AI capabilities -- and the fierce race it has sparked among companies -- the technology has faced criticism for evolving faster than scientists can fully comprehend. "You don't know that the collective properties you began with are actually the collective properties with all the interactions present, and you don't therefore know whether some spontaneous but unwanted thing is lying hidden in the works," stressed Hopfield. He evoked the example of "ice-nine" -- a fictional, artificially engineered crystal in Kurt Vonnegut's 1963 novel "Cat's Cradle" developed to help soldiers deal with muddy conditions but which inadvertently freezes the world's oceans solid, causing the downfall of civilization. "I'm worried about anything that says... 'I'm faster than you are, I'm bigger than you are... can you peacefully inhabit with me?' I don't know, I worry." Hinton said it was impossible to know how to escape catastrophic scenarios at present, "that's why we urgently need more research." "I'm advocating that our best young researchers, or many of them, should work on AI safety, and governments should force the large companies to provide the computational facilities that they need to do that," he added.
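The "store and retrieve memories" behavior that earned Hopfield the prize can be sketched in a few lines of Python. This is a toy illustration of a Hopfield-style network with an arbitrary made-up pattern, not the laureates' own code: one binary pattern is stored with a Hebbian rule, then recovered from a corrupted copy.

```python
import numpy as np

# Toy Hopfield network: store one binary pattern via the Hebbian rule,
# then recover it from a corrupted copy by repeatedly updating the units.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # unit states are +/-1

# Hebbian weights: strengthen connections between co-active units.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt two entries of the stored "memory".
probe = pattern.copy()
probe[0] = -probe[0]
probe[3] = -probe[3]

# Repeated updates drive the state toward a stored pattern
# (a minimum of the network's energy function).
state = probe.copy()
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the stored memory is recovered
```

Flipping a couple of entries and letting the units settle pulls the state back to the stored pattern, which is the "energy minimization" picture the physics committee describes.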
[42]
'They don't just fall out of trees': Nobel awards highlight Britain's AI pedigree
Recognition for Demis Hassabis and Geoffrey Hinton marks the moment when important ingredients came together
It was more than even the most ardent advocates expected. After all the demonstrations of superhuman prowess, and the debates over whether the technology was humanity's best invention yet or its surest route to self-destruction, artificial intelligence landed a Nobel prize this week. And then it landed another. First came the physics prize. The American John Hopfield and the British-Canadian Geoffrey Hinton won for foundational work on artificial neural networks, the computational architecture that underpins modern AIs such as ChatGPT. Then came the chemistry prize, with half handed to Demis Hassabis and John Jumper at Google DeepMind. Their AlphaFold program solved a decades-long scientific challenge by predicting the structure of all life's proteins. That artificial intelligence won two Nobels in as many days is one thing. That both honoured British researchers in a field previously ignored by the Nobels is another. Both Hinton and Hassabis were born in London, albeit nearly three decades apart. The watershed moment raises an obvious question: where did it all go right? And more importantly, will it go wrong? Experts in the field do not credit any particular moment, any particular decision, that ensured Britain's pedigree in artificial intelligence - a technology that can be loosely defined as computer systems performing tasks that typically require human intelligence. But there were important ingredients that came together and set the stage for what happened in Stockholm this week. The foundations were shaped over centuries. The UK was a serious player in statistics, logic, mathematics and engineering - think Thomas Bayes, George Boole, Charles Babbage, Ada Lovelace - long before Alan Turing asked: "Can machines think?" As computers became an established technology, expertise flourished at a handful of centres. 
"The UK has for a long time been a leader in computing science and in AI," says Dame Muffy Calder, vice-principal and head of the college of science and engineering at the University of Glasgow. "We've led for years and years and I put that down in part to the funding environment that we've had in the past that recognised so-called discovery-led research." Unlike research that focuses on cracking a well-defined problem, the research Calder refers to is more speculative. Both AI and quantum technologies have benefited from such work, Calder says, some after decades of support. "That's the message. You've got to keep funding ideas from the beginning," she said. "It can't be all innovation-focused or challenge-focused. The Turing machine? There was no application for the Turing machine when Alan Turing came up with it." Maneesh Sahani, professor of theoretical neuroscience and machine learning, and director of the Gatsby Computational Neuroscience Unit at University College London, highlights how clusters of smart people cropped up across the UK and created a critical mass of expertise. "Britain as a whole has for a long time punched above its weight and I think that's still true," he says. Referring to the machine learning process where instead of being instructed directly, computers "learn" by analysing patterns in data and then making informed decisions, he adds: "But it was really machine learning that the UK got behind very strongly. And that was not because of any central decision. It's one of those things where good people emerged at a similar time." Among the early key groups to make an impact were Edinburgh, Cambridge and Aston Universities, all of which remain strong today. But the momentum Sahani mentions spawned further clusters. His unit at UCL is one of them and its history gives a sense of how these nodes attract and propel expertise. The Gatsby Unit was set up by Hinton, who after studying at Cambridge and Edinburgh spent most of his career in Toronto. 
Sahani returned to the UK for a post at the Gatsby, where Hassabis, who went on to set up DeepMind, did his postdoctoral research. "The Gatsby was a phenomenal draw," Sahani says. The funding from the Gatsby foundation, a charity set up by supermarket heir David Sainsbury, allows the scientists to focus on research without the same demands for teaching and grant chasing that occupy academics elsewhere. "It's like a chain reaction," Sahani says. "When you've got the critical mass, when you've got people who are doing exciting things and talking to each other, others want to show up and be part of that." AI suffered boom and bust cycles for decades, but the machine learning revolution, driven by multi-layered neural networks crunching massive datasets on processors built for gaming, has galvanised investors. The surge in funding, from companies and nations that cannot risk being left behind, has transformed the landscape, with tech firms, primarily in the US, now dominating AI research. "It's difficult, increasingly difficult, to be competitive, and that's not only with universities in other countries but with industry," says Sahani. "The UK doesn't have quite the disproportionate presence that it had 10 or 15 years ago. And that's not because we went backwards, it was because everybody else invested and did a lot of catching up." Universities cannot hope to compete with the vast computing resources available to Google and other big tech firms, their massive datasets to feed AI models, or the salaries they can offer. Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the UN's advisory body on AI, says the priority for the UK must be to protect its "academic legacy" in the technology. "It is so important we keep our foot on the pedal of funding AI research in our universities. This is where future generations of AI technologies will come from and we need the high-level skills to support our growing AI industry," she says. 
"Other countries are deeply envious. It takes 20 years or more to grow a research star like Hassabis. They don't just fall out of the trees." Sahani believes more centres like the Gatsby unit, where researchers can focus purely on their research, and a willingness among funders to pick winners and back them, will help the UK in the race ahead. Calder says tight relationships between universities and tech firms are essential for both to flourish, while the UK should make better use of its sovereign assets, such as NHS health data. "We need to look at the resources we have," she says. Are more Nobels on the horizon? That will come down to individuals as well as the working environments around them. "What stands out about Geoff is his creativity and insatiable curiosity. He goes after all sorts of different problems," says Sahani. "With Demis, what was evident when he was here was his dynamism. He had a sense there were great things to be built and he was going to go after them."
[43]
Geoffrey Hinton, AI pioneer and figurehead of doomerism, wins Nobel Prize
Speaking on the phone to the Academy minutes after the announcement, Hinton said he was flabbergasted: "I had no idea this would happen. I'm very surprised." Hinton shares the award with fellow computer scientist John Hopfield, who invented a type of pattern-matching neural network that could store and reconstruct data. Hinton built on this technology, known as a Hopfield network, to develop the Boltzmann machine, and later helped popularize backpropagation, the algorithm that lets neural networks learn. Hopfield and Hinton borrowed methods from physics, especially statistical techniques, to develop their approaches. In the words of the Nobel Prize committee, the pair are recognized "for foundational discoveries and inventions that enable machine learning with artificial neural networks." But since May 2023, when MIT Technology Review helped break the news that Hinton was now scared of the technology that he had helped bring about, the 76-year-old scientist has become much better known as a figurehead for doomerism -- the idea there's a very real risk that near-future AI could produce catastrophic results, up to and including human extinction. Doomerism wasn't new, but Hinton -- who won the Turing Award, the top prize in computing science, in 2018 -- brought new credibility to a position that many of his peers once considered kooky. What led Hinton to speak out? When I met with him in his London home last year, Hinton told me that he was awestruck by what the latest large language models could do. OpenAI's latest flagship model, GPT-4, had been released a few weeks before. Based on what Hinton had seen, he was now convinced that such technology -- based on deep learning -- would quickly become smarter than humans. And he was worried about what motivations it would have when it did. "I have suddenly switched my views on whether these things are going to be more intelligent than us," he told me at the time. "I think they're very close to it now and they will be much more intelligent than us in the future. 
How do we survive that?" Hinton's views set off a months-long media buzz and made the kind of existential risks that he and others were imagining (from economic collapse to genocidal robots) into mainstream concerns. Hundreds of top scientists and tech leaders signed open letters warning of the potential catastrophic downsides of artificial intelligence. A moratorium on AI development was floated. Politicians assured voters they would do what they could to prevent the worst. Despite the buzz, many consider Hinton's views to be fantastical. Yann LeCun, chief scientist at Meta AI and Hinton's fellow recipient of the 2018 Turing Award, has called doomerism "preposterously ridiculous." Today's prize rewards foundational work in a technology that has become part of everyday life. It is also sure to shine an even brighter light on Hinton's more scare-mongering opinions.
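The learning the article refers to boils down, in its simplest form, to nudging weights against the gradient of an error. Here is a toy one-weight sketch of that idea (illustrative numbers only, not anyone's actual model):

```python
# Toy gradient-descent "training" of a single linear neuron with one weight.
# Goal: find w so that w * x matches the target output.
w = 0.0                        # initial weight
x, target = 2.0, 6.0           # one training example; ideal w is 3.0
lr = 0.05                      # learning rate

for _ in range(200):
    y = w * x                      # forward pass
    grad = 2 * (y - target) * x    # gradient of squared error (y - target)**2 w.r.t. w
    w -= lr * grad                 # step the weight against the gradient

print(round(w, 3))  # converges to 3.0
```

Backpropagation generalizes this single-weight update to millions of weights across many layers by computing each weight's gradient with the chain rule.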
[44]
Godfather of AI wins Nobel Prize - but says he regrets invention
A British scientist known as the "Godfather of AI" has won the Nobel Prize for physics - but has regrets about his invention. Prof Geoffrey Hinton, who was born in London and studied at the University of Cambridge, shared the honour with Prof John Hopfield of Princeton University. The 76-year-old was staying in a "cheap hotel in California" when he received the early morning phone call informing him of his award, while Chicago-born Prof Hopfield, 91, was in a thatched cottage in England. The Royal Swedish Academy of Sciences recognised their work using "fundamental concepts" to design artificial neural networks. The work, carried out in the 1980s, which involved inventing a method that can autonomously find properties in data and identify specific elements in pictures, was foundational to the modern AI increasingly dominating the world today. Prof Hinton is now at the University of Toronto after a decade working at Google. He resigned from his role at the tech giant last year so he could speak out more openly about what he considers the "existential risk" posed to humanity by AI. The academic, who has previously sounded the alarm over the risks of AI, said that he had some regrets about introducing the technology to the world. "There's two kinds of regret," he said. "There is the kind where you feel guilty because you do something you know you shouldn't have done, and then there's regret where you do something you would do again in the same circumstances but it may in the end not turn out well. "That second regret I have. In the same circumstances I would do the same again but I am worried that the overall consequence of this is that systems more intelligent than us eventually take control. "We have no experience of what it is like to have things that are smarter than us." The technology will, he believes, provide superior healthcare and lead to "huge improvements" in productivity and efficiency. 
However, he warns the technology could also pose a significant risk to humanity. "I think it will have a huge influence [on our civilisation]," Prof Hinton said. "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength it is going to exceed people in intellectual ability. "We have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control," he told the Nobel committee over the phone on Tuesday. The news of his victory, he said, came as "a bolt out of the blue", leaving him "flabbergasted". Prof Hinton is now just the second person in history to win both a Nobel Prize and the Turing Award, often dubbed the Nobel for computing. The Nobel prize comes with a share of 11 million Swedish kronor (£811,000). "I was going to get an MRI scan today, but I think I will have to cancel that," he said. Adrian Smith, president of the Royal Society, said: "I offer my warmest congratulations to Prof Hinton who is distinguished for his work on artificial neural nets. "He has compared the effects of brain damage with effects of losses in such a net, and found striking similarities with human impairment, such as for recognition of names and losses of categorisation. This may well be the start of autonomous intelligent brain-like machines."
[45]
Google's Glory: How About a Nobel Prize in Competition
Two of its scientists get the prize in Chemistry and Google gets an antitrust complaint from the law firm of Biden, Harris, & Garland. So how is Google rewarded for incubating an artificial intelligence research laboratory, DeepMind, for which two of its scientists, Demis Hassabis and John Jumper, were just announced to have won, along with a third scientist from the University of Washington, the 2024 Nobel Prize in Chemistry? The Justice Department, in a proposed remedy framework issued on Tuesday, disclosed that it is considering forcing the tech company to break up. That's just one of the solutions that the DOJ proposed to loosen the company's iron grip on online search. In August, a district judge, Amit Mehta, determined that Google had built an illegal monopoly. The ruling, as our Novi Zhukovsky writes, marked a "major win for antitrust enforcers who have been working for years to curb the dominance of big tech companies, particularly as the artificial intelligence business has continued to expand." Judge Mehta wrote that Google achieved its market dominance not by "happenstance" but by creating "the industry's highest quality search engine, which has earned Google the trust of hundreds of millions of daily users." Still, Judge Mehta ruled that the company's monopoly was illegal. Google, a Wall Street Journal editorial muses, "makes for a very strange monopolist -- one that does better by consumers because its search engine is superior." The Justice Department, in a 32-page brief, says it is "considering behavioral and structural remedies" to curb Google from using such products as "Chrome, Play, and Android" to advantage the company "over rivals or new entrants." The proposal marks the most significant effort by Washington to limit the dominance of big tech companies since the Justice Department's attempt to do so with Microsoft two decades ago. 
At Google's DeepMind laboratory, the two laureates used artificial intelligence to predict the structure of millions of proteins. A third scientist draped with glory, David Baker of the University of Washington, invented a new protein through computer software. AI "is changing the way we do science," claims an earlier recipient of the Nobel in Chemistry, Frances Arnold. "It is supercharging our ability to explore previously intractable problems." The irony is marked by our Ira Stoll in his Substack "The Editors." He writes that "It's enough to make you wonder what's wrong with Washington. The Swedes at least are shrewd enough to reward excellence with prizes; in America, it can sometimes seem as if we punish success. Instead of a prize medal and a white-tie dinner in Stockholm" he writes, "you can expect a lawsuit instead from Merrick Garland's Justice Department." "This is a bipartisan problem," Mr. Stoll reckons. "While the Biden Justice Department has been pursuing Google, President Trump and Senator Vance have both also been repeatedly and openly publicly critical of what they deride as Big Tech." While Mr. Stoll notes that "Big Tech" is not perfect, "politicians might want to be a little more hesitant before demonizing or crushing one of America's most winning industries." Google, for its part, is expected to fight any attempt at a breakup. The company's vice president of regulatory affairs, Lee-Anne Mulholland, calls the idea "radical" and claims it goes "well beyond the legal scope of the Court's decision about Search distribution contracts." An enforced breakup, she writes, would have "significant unintended consequences for consumers, businesses, and American competitiveness." "We'll rue the regulatory onslaught," Mr. Stoll writes at "The Editors," yet "we'll also celebrate that the prizewinner, Google, is an American company." 
He contends that "it says something about the power and durability of American capitalism that a company like Google can flourish even despite a hostile public-policy environment." Mr. Hassabis, after receiving his honor, called his project the "first proof point of AI's incredible potential to accelerate scientific discovery." Google's Nobel, Mr. Stoll adds, "is a reminder that universities aren't the only places where significant discoveries happen." "The Justice Department's antitrust lawyers," he says, "profess to be worried about Google being a monopoly." In a "larger sense," though, "what Google and similar innovative companies are doing" is "providing healthy competition to academia's monopoly on scientific research and discovery. That's prizeworthy."
[46]
Geoffrey Hinton, soft-spoken godfather of AI
For a brief moment in spring last year, the bird-like features of bespectacled British-born researcher Geoffrey Hinton were poking out from TV screens across the world. Hinton, a big name in the world of artificial intelligence but largely unknown outside it, was warning that the technology he had helped to create -- for which he was awarded the 2024 Nobel Prize -- could pose an existential threat to humanity. "What do you think the chances are of AI wiping out humanity?" a reporter from the US network CBS News asked in March last year. "It's not inconceivable," replied Hinton, making a very British understatement. A few weeks later, he had walked away from his job at Google and was giving interviews to media across the world, quickly becoming the poster-child for AI doomsayers. - Difficult family life - Hinton, a 76-year-old soft-spoken career academic, was born in London, raised in Bristol and went to the universities of Cambridge and Edinburgh. He has described his early life as a high pressure existence, trying to live up to the expectations of a family with an illustrious history, littered with storied scientists. Even his father was a member of the Royal Society. He told Toronto Life magazine he had struggled with depression his whole life and work was a way of releasing the pressure. But Hinton has rarely been able to fully escape into his work. His first wife died from cancer shortly after the couple had adopted their two children in the early 1990s, thrusting him into the role of single parent. "I cannot imagine how a woman with children can have an academic career," he told Toronto Life. "I'm used to being able to spend my time just thinking about ideas... But with small kids, it's just not on." - 'Utterly correct' - After spending time in universities in the United States in the late 1970s and 1980s, Hinton relocated to Toronto in 1987, his base ever since. 
Hinton, a self-professed socialist who recalls his family stuffing envelopes for the British Labour Party, had been unwilling to accept funding from the US military, the biggest funder of his kind of research. The Canadian government agreed to back his work, which attempted to replicate the functioning of the human brain by engineering artificial "neural networks". Although he spent years on the academic fringes, a research community grew up around him in the Canadian city, and eventually his vision came to dominate the field. And then Google came knocking. He took a job with the Silicon Valley juggernaut in 2013 and suddenly became one of the central figures in the emerging industry. As competition ramped up, many of his students took posts at companies including Meta, Apple and Uber. Ilya Sutskever, who co-founded OpenAI, worked in Hinton's team for years and has described the time as "critical" for his career. He told the University of Toronto's website in 2017 that they had pursued "ideas that were both highly unappreciated by most scientists, yet turned out to be utterly correct". But Sutskever and Hinton have emerged as prominent worriers about the technology -- Sutskever was pushed out of OpenAI for raising concerns about its products a year after Hinton exited Google. And true to form, even in his remarks on receiving the Nobel Prize -- he got the news in a "cheap hotel in California" -- Hinton was still talking of regret rather than success. "In the same circumstances, I would do the same again," he said. "But I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control."
[48]
Who is Geoffrey Hinton? AI 'Godfather' Wins Nobel Prize for AI Breakthroughs -- Calls Chatbots 'Quite Scary'
The scientist left Google in 2023 after a decade to warn about the dangers of AI. British-Canadian scientist Geoffrey Hinton has been awarded the first Nobel Prize for AI alongside scientist John Hopfield. Hinton, known as the 'Godfather of AI,' has been an extremely influential figure in the field and recently spoke out about its threat, calling chatbots "quite scary."
Geoffrey Hinton Wins Nobel Prize for AI
The Nobel Prize in Physics was awarded to Hinton, 76, and Hopfield, 91, by the Royal Swedish Academy of Sciences at a press conference in Stockholm. Hinton said he was "very surprised" to receive the award, having been told by the Academy at 1am in a California hotel. "I'm in a cheap hotel in California which doesn't have a good internet or phone connection," Hinton told the Academy. "I was going to have an MRI scan today but I'll have to cancel that!" The prize was granted to Hinton for his foundational work in machine learning, the technology that powers leading AI systems such as OpenAI's ChatGPT and Google's Bard. Machine learning enables systems to learn from data, recognize patterns, and make decisions without explicit programming: instead of being manually programmed for every task, a machine-learning system improves its performance over time as it is exposed to vast amounts of information.
Who is Geoffrey Hinton?
Hinton's lasting fame in AI stems primarily from his contributions to neural networks, a branch of machine learning that mimics the structure and function of the human brain. His early research in the 1980s focused on backpropagation, a method for training neural networks that allows them to learn from their mistakes. The algorithm remains fundamental to most modern neural networks and deep learning models today. Hinton had to battle skepticism of his approach for many years, as neural networks struggled to outperform other methods throughout the 1990s and early 2000s.
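The idea of a network "learning from its mistakes" can be sketched in a few lines of Python. This is an illustrative toy, not Hinton's original code: a tiny two-layer network learns the XOR function by propagating its output error backwards through each layer and nudging every weight in the direction that reduces that error.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # network output

_, out0 = forward(X)                  # predictions before training
for _ in range(10000):
    h, out = forward(X)
    # Backward pass: the "learning from mistakes" step. The output
    # error is pushed back through each layer via the chain rule,
    # and every weight moves to reduce that error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0, keepdims=True)

_, out1 = forward(X)                  # predictions after training
print("error before:", float(np.mean((out0 - y) ** 2)))
print("error after: ", float(np.mean((out1 - y) ** 2)))
```

Repeated over thousands of passes, this error-driven adjustment is what lets the network improve without being explicitly programmed for the task, which is exactly the behaviour the prose above describes.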
His persistence was validated in the early 2010s, as deep learning began to produce major results in areas such as image and speech recognition. Hinton's research group at the University of Toronto was pivotal to these successes, especially after his team won the prestigious ImageNet competition in 2012 using a deep learning model known as a convolutional neural network. The victory brought international attention to the potential of deep learning and is often cited as a turning point in the history of AI. Hinton was hired by Google's Brain team in 2013, where he worked to further develop the company's AI technology.
Geoffrey Hinton's AI Warning
Hinton made a surprise exit from Google in 2023, telling the BBC that some of the dangers of AI chatbots were "quite scary". He said that he had previously believed Google to be a "steward" of the emerging technology, but that this changed when Microsoft fitted a chatbot into its Bing search engine, threatening Google's search business. Hinton also warned of the existential risk of developing AI technology. "We're biological systems and these are digital systems," he told the BBC. "And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world." He added: "And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."
The 2024 Nobel Prizes in Physics and Chemistry recognize AI breakthroughs, igniting discussions about the evolving nature of scientific disciplines and the need to modernize Nobel categories.
The 2024 Nobel Prizes in Physics and Chemistry have thrust artificial intelligence (AI) into the spotlight, recognizing groundbreaking work in machine learning and protein structure prediction. This unprecedented focus on AI in two major scientific categories has sparked a lively debate about the nature of scientific disciplines and the relevance of traditional Nobel Prize categories in the modern era [1][2].
The Nobel Prize in Physics was awarded to Geoffrey Hinton and John Hopfield for their pioneering work on artificial neural networks. Hinton, known for his contributions to deep learning, and Hopfield, recognized for his work on associative neural networks, laid the foundations for modern machine learning techniques [1][2].
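The associative networks Hopfield is recognized for can be illustrated with a short sketch (a toy illustration, not the laureates' code, assuming nothing beyond NumPy): patterns are stored with a Hebbian weight rule, and recall repeatedly updates neurons until a corrupted cue settles back onto the stored pattern.

```python
import numpy as np

def train(patterns):
    """Hebbian rule: each stored +/-1 pattern becomes a stable state."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / n

def recall(w, state, sweeps=10):
    """Update neurons one by one until the state settles on a memory."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Store one 8-unit pattern, then recover it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1  # flip one bit to corrupt the cue
recovered = recall(w, noisy)
print(np.array_equal(recovered, pattern))  # the flipped bit is repaired: True
```

The network behaves as an associative memory: a partial or noisy input is attracted back to the nearest stored pattern, the property that connects Hopfield's model to the physics of energy-minimizing systems.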
However, this decision has not been without controversy. Some physicists argue that the awarded research falls more within the domain of computer science than physics. Jonathan Pritchard, an astrophysicist at Imperial College London, expressed skepticism about classifying the work as a physics discovery [1].
The Chemistry Nobel was shared between David Baker, Demis Hassabis, and John Jumper. Baker was honored for his work on protein design, while Hassabis and Jumper, from Google DeepMind, were recognized for developing AlphaFold, an AI system that can predict protein structures with remarkable accuracy [2][4].
This award highlights the transformative impact of AI on biochemistry and structural biology. AlphaFold solved a 50-year-old problem in protein structure prediction, demonstrating the power of AI in tackling complex scientific challenges [4].
The AI-centric awards have reignited discussions about the need to modernize Nobel Prize categories. Some scientists, including the laureates themselves, have suggested that the current categories may not adequately reflect the interdisciplinary nature of modern scientific research [5].
Hinton noted that his work might be more appropriate for a Nobel Prize in computer science, while Hassabis pointed out that computer science didn't exist when the Nobel Prizes were first established [5]. This has led to calls for the Nobel Foundation to consider updating its categories to better represent contemporary scientific fields.
The recognition of AI in these prestigious awards is likely to have far-reaching effects on the scientific community. It may inspire more researchers to incorporate AI techniques into their work across various disciplines [4][5].
David Jones, a bioinformatician at University College London, suggests that the prizes recognize not only the disruptive force of AI but also the steady accumulation of knowledge in structural and computational biology [2]. This highlights the collaborative nature of scientific progress and the increasing role of interdisciplinary approaches.
As AI continues to transform scientific research, the 2024 Nobel Prizes serve as a milestone in recognizing its impact. While celebrating these achievements, the scientific community is also grappling with questions about how to categorize and honor breakthroughs that span traditional disciplinary boundaries. The debate surrounding these awards may shape the future of scientific recognition and collaboration in the age of artificial intelligence.