Sources
[1]
AI Apocalypse? Why language surrounding tech is sounding increasingly religious
TORONTO (AP) -- At 77 years old, Geoffrey Hinton has a new calling in life. Like a modern-day prophet, the Nobel Prize winner is raising alarms about the dangers of uncontrolled and unregulated artificial intelligence.

Frequently dubbed the "Godfather of AI," Hinton is known for his pioneering work on deep learning and neural networks, which helped lay the foundation for the AI technology often used today. Feeling "somewhat responsible," he began speaking publicly about his concerns in 2023 after leaving his job at Google, where he had worked for more than a decade.

As the technology -- and the investment dollars -- powering AI have advanced in recent years, so too have the stakes. "It really is godlike," Hinton said.

Hinton is among a growing number of prominent tech figures who speak of AI in language once reserved for the divine. OpenAI CEO Sam Altman has referred to his company's technology as a "magic intelligence in the sky," while Peter Thiel, the co-founder of PayPal and Palantir, has even argued that AI could help bring about the Antichrist.

Will AI bring condemnation or salvation?

There are plenty of skeptics who doubt the technology merits this kind of fear, including Dylan Baker, a former Google employee and lead research engineer at the Distributed AI Research Institute, which studies the harmful impacts of AI.

"I think oftentimes they're operating from magical fantastical thinking informed by a lot of sci-fi that presumably they got in their formative years," Baker said. "They're really detached from reality."

Although chatbots like ChatGPT only recently penetrated the zeitgeist, certain Silicon Valley circles have prophesied AI's power for decades.

"We're trying to wake people up," Hinton said. "To get the public to understand the risks so that the public pressures politicians to do something about it."
While researchers like Hinton warn about the existential threat they believe AI poses to humanity, CEOs and theorists on the other side of the spectrum argue we are approaching a kind of technological apocalypse that will usher in a new age of human evolution.

In an essay published last year titled "Machines of Loving Grace: How AI Could Transform the World for the Better," Anthropic CEO Dario Amodei lays out his vision for a future "if everything goes right with AI." The AI entrepreneur predicts "the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights."

While Amodei opts for the phrase "powerful AI," others use terms like "the singularity" or "artificial general intelligence" (AGI). Though proponents of these concepts don't often agree on how to define them, they refer broadly to a hypothetical future point at which AI will surpass human-level intelligence, potentially triggering rapid, irreversible changes to society.

Computer scientist and author Ray Kurzweil has been predicting since the 1990s that humans will one day merge with technology, a concept often called transhumanism.

"We're not going to actually tell what comes from our own brain versus what comes from AI. It's all going to be embedded within ourselves. And it's going to make ourselves more intelligent," Kurzweil said.

In his latest book, "The Singularity Is Nearer: When We Merge with AI," Kurzweil doubles down on his earlier predictions. He believes that by 2045 we will have "multiplied our own intelligence a millionfold."

"Yes," he eventually conceded when asked whether he considers AI to be his religion. It informs his sense of purpose.
"My thoughts about the future and the future of technology and how quickly it's coming definitely affects my attitudes towards being here and what I'm doing and how I can influence other people," he said.

Visions of the apocalypse bubble up

Despite Thiel's explicit invocation of language from the Book of Revelation, the positive visions of an AI future are more "apocalyptic" in the historical sense of the word.

"In the ancient world, apocalyptic is not negative," explains Domenico Agostini, a professor at the University of Naples L'Orientale who studies ancient apocalyptic literature. "We've completely changed the semantics of this word."

The term "apocalypse" comes from the Greek word "apokalypsis," meaning "revelation." Although often associated today with the end of the world, apocalypses in ancient Jewish and Christian thought were a source of encouragement in times of hardship or persecution.

"God is promising a new world," said Professor Robert Geraci, who studies religion and technology at Knox College. "In order to occupy that new world, you have to have a glorious new body that triumphs over the evil we all experience."

Geraci first noticed apocalyptic language being used to describe AI's potential in the early 2000s. Kurzweil and other theorists eventually inspired him to write his 2010 book, "Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality." The language reminded him of early Christianity.

"Only we're gonna slide out God and slide in ... your pick of cosmic science laws that supposedly do this and then we were going to have the same kind of glorious future to come," he said.

Geraci argues this kind of language hasn't changed much since he began studying it. What surprises him is how pervasive it has become. "What was once very weird is kind of everywhere," he said.

Has Silicon Valley finally found its God?

One factor in the growing cult of AI is profitability.
"Twenty years ago, that fantasy, true or not, wasn't really generating a lot of money," Geraci said. Now, though, "there's a financial incentive to Sam Altman saying AGI is right around the corner."

But Geraci, who argues ChatGPT "isn't even remotely, vaguely, plausibly conscious," believes there may be more driving this phenomenon.

Historically, the tech world has been notoriously devoid of religion. Its secular reputation so precedes it that one episode of the satirical HBO comedy series "Silicon Valley" revolves around "outing" a co-worker as Christian.

Rather than viewing the skeptical tech world's veneration of AI as ironic, Geraci believes the two are causally linked. "We human beings are deeply, profoundly, inherently religious," he said, adding that the impressive technologies behind AI might appeal to people in Silicon Valley who have already pushed aside "ordinary approaches to transcendence and meaning."

No religion is without skeptics

Not every Silicon Valley CEO has been converted -- even if they want in on the tech.

"When people in the tech industry talk about building this one true AI, it's almost as if they think they're creating God or something," Meta CEO Mark Zuckerberg said on a podcast last year as he promoted his company's own venture into AI.

Although transhumanist theories like Kurzweil's have become more widespread, they are still not ubiquitous within Silicon Valley. "The scientific case for that is in no way stronger than the case for a religious afterlife," argues Max Tegmark, a physicist and machine learning researcher at the Massachusetts Institute of Technology.

Like Hinton, Tegmark has been outspoken about the potential risks of unregulated AI. In 2023, as president of the Future of Life Institute, Tegmark helped spearhead an open letter calling on powerful AI labs to "immediately pause" the training of their systems. The letter collected more than 33,000 signatures, including from Elon Musk and Apple co-founder Steve Wozniak.
Tegmark considers the letter to have been successful because it helped "mainstream the conversation" about AI safety, but believes his work is far from over.

With regulations and safeguards, Tegmark thinks AI can be used as a tool to do things like cure diseases and increase human productivity. But it is imperative, he argues, to stay away from the "quite fringe" race that some companies are running -- "the pseudoreligious pursuit to try to build an alternative God."

"There are a lot of stories, both in religious texts and in, for example, ancient Greek mythology, about how when we humans start playing gods, it ends badly," he said. "And I feel there's a lot of hubris in San Francisco right now."

___

Associated Press religion coverage receives support through the AP's collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content.
[2]
How Silicon Valley is using religious language to talk about AI
TORONTO (AP) -- As the rapid, unregulated development of artificial intelligence continues, the language people in Silicon Valley use to describe it is becoming increasingly religious. From predicting the potential destruction of humanity to a transhumanist apocalypse where people merge with AI, here's what some of the key players are saying.

___

"I think religion will be in trouble if we create other beings. Once we start creating beings that can think for themselves and do things for themselves, maybe even have bodies if they're robots, we may start realizing we're less special than we thought. And the idea that we're very special and we were made in the image of God, that idea may go out the window." -- Nobel Prize winner Geoffrey Hinton, often dubbed the "Godfather of AI" for his pioneering work on deep learning and neural networks.

___

"By 2045, which is only 20 years from now, we'll be a million times more powerful. And we'll be able to have expertise in every field." -- author and computer scientist Ray Kurzweil, who believes humans will merge with AI.

___

"There certainly are dimensions of the technology that have become extremely powerful in the last century or two that have an apocalyptic dimension. And perhaps it's strange not to try to relate it to the biblical tradition." -- PayPal and Palantir co-founder Peter Thiel, speaking to the Hoover Institution at Stanford University.

___

"I feel that the four big AI CEOs in the U.S. are modern-day prophets with four different versions of the Gospel and they're all telling the same basic story that this is so dangerous and so scary that I have to do it and nobody else." -- Max Tegmark, a physicist and machine learning researcher at the Massachusetts Institute of Technology.

___

"When people in the tech industry talk about building this one true AI, it's almost as if they think they're creating God or something." -- Meta CEO Mark Zuckerberg on a podcast promoting his company's own venture into AI.
___

"Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits. But it is a world worth fighting for. If all of this really does happen over 5 to 10 years -- the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights -- I suspect everyone watching it will be surprised by the effect it has on them." -- Anthropic CEO Dario Amodei in his essay, "Machines of Loving Grace: How AI Could Transform the World for the Better."

___

"You and I are living through this once-in-human-history transition where humans go from being the smartest thing on planet Earth to not the smartest thing on planet Earth." -- OpenAI CEO Sam Altman during an interview for TED Talks.

___

"These really big, scary problems that are complex and challenging to address -- it's so easy to gravitate towards fantastical thinking and wanting a one-size-fits-all global solution. I think it's the reason that so many people turn to cults and all sorts of really out there beliefs when the future feels scary and uncertain. I think this is not different than that. They just have billions of dollars to actually enact their ideas." -- Dylan Baker, lead research engineer at the Distributed AI Research Institute.
[6]
How Silicon Valley is using religious language to talk about AI
TORONTO (AP) -- As the rapid, unregulated development of artificial intelligence continues, the language people in Silicon Valley use to describe it is becoming increasingly religious. From predicting the potential destruction of humanity to a transhumanist apocalypse where people merge with AI, here's what some of the key players are saying.

___

"I think religion will be in trouble if we create other beings. Once we start creating beings that can think for themselves and do things for themselves, maybe even have bodies if they're robots, we may start realizing we're less special than we thought. And the idea that we're very special and we were made in the image of God, that idea may go out the window." -- Nobel Prize winner Geoffrey Hinton, often dubbed the "Godfather of AI" for his pioneering work on deep learning and neural networks.

___

"By 2045, which is only 20 years from now, we'll be a million times more powerful. And we'll be able to have expertise in every field." -- author and computer scientist Ray Kurzweil, who believes humans will merge with AI.

___

"There certainly are dimensions of the technology that have become extremely powerful in the last century or two that have an apocalyptic dimension. And perhaps it's strange not to try to relate it to the biblical tradition." -- PayPal and Palantir co-founder Peter Thiel speaking to the Hoover Institution at Stanford University.

___

"I feel that the four big AI CEOs in the U.S. are modern-day prophets with four different versions of the Gospel and they're all telling the same basic story that this is so dangerous and so scary that I have to do it and nobody else." -- Max Tegmark, a physicist and machine learning researcher at the Massachusetts Institute of Technology.

___

"When people in the tech industry talk about building this one true AI, it's almost as if they think they're creating God or something." -- Meta CEO Mark Zuckerberg on a podcast promoting his company's own venture into AI.

___

"Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits. But it is a world worth fighting for. If all of this really does happen over 5 to 10 years -- the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights -- I suspect everyone watching it will be surprised by the effect it has on them." -- Anthropic CEO Dario Amodei in his essay, "Machines of Loving Grace: How AI Could Transform the World for the Better."

___

"You and I are living through this once-in-human-history transition where humans go from being the smartest thing on planet Earth to not the smartest thing on planet Earth." -- OpenAI CEO Sam Altman during an interview for TED Talks.

___

"These really big, scary problems that are complex and challenging to address -- it's so easy to gravitate towards fantastical thinking and wanting a one-size-fits-all global solution. I think it's the reason that so many people turn to cults and all sorts of really out there beliefs when the future feels scary and uncertain. I think this is not different than that. They just have billions of dollars to actually enact their ideas." -- Dylan Baker, lead research engineer at the Distributed AI Research Institute.
[7]
AI Apocalypse? Why Language Surrounding Tech Is Sounding Increasingly Religious
"My thoughts about the future and the future of technology and how quickly it's coming definitely affects my attitudes towards being here and what I'm doing and how I can influence other people," Kurzweil said.

Visions of the apocalypse bubble up

Despite Thiel's explicit invocation of language from the Book of Revelation, the positive visions of an AI future are more "apocalyptic" in the historical sense of the word. "In the ancient world, apocalyptic is not negative," explains Domenico Agostini, a professor at the University of Naples L'Orientale who studies ancient apocalyptic literature. "We've completely changed the semantics of this word." The term "apocalypse" comes from the Greek word "apokalypsis," meaning "revelation." Although often associated today with the end of the world, apocalypses in ancient Jewish and Christian thought were a source of encouragement in times of hardship or persecution.

"God is promising a new world," said Professor Robert Geraci, who studies religion and technology at Knox College. "In order to occupy that new world, you have to have a glorious new body that triumphs over the evil we all experience." Geraci first noticed apocalyptic language being used to describe AI's potential in the early 2000s. Kurzweil and other theorists eventually inspired him to write his 2010 book, "Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality." The language reminded him of early Christianity. "Only we're gonna slide out God and slide in ... your pick of cosmic science laws that supposedly do this and then we were going to have the same kind of glorious future to come," he said.

Geraci argues this kind of language hasn't changed much since he began studying it. What surprises him is how pervasive it has become. "What was once very weird is kind of everywhere," he said.

Has Silicon Valley finally found its God?

One factor in the growing cult of AI is profitability.
[8]
How Silicon Valley is using religious language to talk about AI - The Economic Times
Tech leaders are using religious language to describe the rise of artificial intelligence. Some, like Geoffrey Hinton and Sam Altman, warn it could surpass human intelligence. Others, including Ray Kurzweil and Dario Amodei, see hope in merging with AI or solving global problems, reflecting both fear and optimism.
As AI technology advances, prominent figures in Silicon Valley are increasingly using religious and apocalyptic language to describe its potential impact, sparking debates about the future of humanity and technology.
In recent years, the language surrounding artificial intelligence (AI) in Silicon Valley has taken on an increasingly religious tone. Prominent figures in the tech industry are now using terms and concepts traditionally associated with divine entities to describe the potential of AI technology [1][2][3].

Geoffrey Hinton, often referred to as the "Godfather of AI," has become a modern-day prophet at the age of 77. After leaving his position at Google, Hinton began publicly voicing his concerns about the dangers of uncontrolled and unregulated AI. He describes the technology as "godlike," emphasizing its immense power and potential impact on humanity [1][3][5].

The discourse surrounding AI has polarized into two main camps: those who warn of existential threats and those who envision a technological utopia. On one side, researchers like Hinton caution about the risks AI poses to humanity. On the other, CEOs and theorists predict a transformative future where AI ushers in a new age of human evolution [1][3][5].

Dario Amodei, CEO of Anthropic, paints an optimistic picture in his essay "Machines of Loving Grace: How AI Could Transform the World for the Better." He envisions a future where AI contributes to defeating diseases, lifting billions out of poverty, and sparking a renaissance in liberal democracy and human rights [1][3][5].

Concepts like "the singularity" and "artificial general intelligence" (AGI) are becoming increasingly prevalent in AI discussions. These terms refer to a hypothetical future point where AI surpasses human-level intelligence, potentially triggering rapid, irreversible societal changes [1][3][5].

Ray Kurzweil, a computer scientist and author, has long been a proponent of transhumanism, the idea that humans will merge with technology. In his latest book, "The Singularity Is Nearer: When We Merge with AI," Kurzweil predicts that by 2045, human intelligence will be amplified a millionfold through this merger [1][3][5].

Despite the often ominous connotations of "apocalypse" in modern usage, scholars point out that the term originally had a more positive meaning. Domenico Agostini, a professor studying ancient apocalyptic literature, explains that in the ancient world, apocalyptic visions were sources of encouragement during difficult times [1][3][5].

Robert Geraci, a professor studying religion and technology, draws parallels between early Christian apocalyptic thought and current AI discourse. He notes that both promise a new world and a transformed existence, with AI taking the place of divine intervention in modern narratives [1][3][5].

Not everyone is convinced by these grand visions. Dylan Baker, a lead research engineer at the Distributed AI Research Institute, argues that much of this thinking is detached from reality and rooted in science fiction rather than scientific fact [1][2][4].

Some experts, like Geraci, suggest that the proliferation of this quasi-religious AI discourse may be partly driven by financial incentives. As AI becomes increasingly profitable, there's a growing motivation for industry leaders to promote grandiose visions of AI's potential [1][3][5].

The use of religious language in AI discourse has become increasingly widespread. What was once considered fringe thinking is now commonplace in Silicon Valley. This shift reflects not only technological advancements but also changing cultural attitudes towards AI and its role in shaping our future [1][3][5].
Summarized by Navi