9 Sources
[1]
AI Superintelligence Prohibition Called for by Hundreds of High-Profile Figures
The rapid advancements in AI have raised quite a few eyebrows, as big tech companies like Google, xAI, Meta and OpenAI race to create the smartest models at a breakneck pace. And despite the potential benefits of AI, there are just as many, if not more, concerns about how it could negatively impact humanity. So much so that more than 700 prominent public figures have signed a statement calling for a prohibition on AI superintelligence until its development can be done safely and until there's strong public buy-in for it. The statement was published Thursday and says the development of AI that can outperform humans at nearly all cognitive tasks, especially with little oversight, is concerning. Fears of everything from loss of freedom to national security risks and human extinction are all top of mind, according to the group. Signatories include "godfathers of AI" Yoshua Bengio and Geoffrey Hinton, former policymakers, and celebrities like Kate Bush and Joseph Gordon-Levitt. Elon Musk has previously warned of the dangers of AI, going so far as to say that humans are "summoning a demon" with it, and he signed a similar letter alongside other tech leaders in early 2023 urging a pause on AI development. The Future of Life Institute also released a national poll this week showing just 5% of Americans surveyed support the current fast, unregulated development toward superintelligence.
More than half of respondents -- 64% -- said that superintelligent AI shouldn't be developed until it's proved to be safe and controllable, and 73% want robust regulation on advanced AI. Interested parties may also sign the statement; the signature count stood at about 27,700 as of this writing.
[2]
Worried about superintelligence? So are these AI leaders - here's why
Leaders argue that AI could existentially threaten humans. Prominent AI figures, alongside 1,300 others, endorsed the worry. The public is equally concerned about "superintelligence." The surprise release of ChatGPT just under three years ago was the starting gun for an AI race that has been rapidly accelerating ever since. Now, a group of industry experts is warning -- and not for the first time -- that AI labs should slow down before humanity drives itself off a cliff. A statement published Wednesday by the Future of Life Institute (FLI), a nonprofit organization focused on existential AI risk, argues that the development of "superintelligence" -- an AI industry buzzword that usually refers to a hypothetical machine intelligence that can outperform humans on any cognitive task -- presents an existential risk and should therefore be halted until a safe pathway forward can be established. The unregulated competition among leading AI labs to build superintelligence could result in "human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction," the authors of the statement wrote. They go on to argue that a prohibition on the development of superintelligent machines could be enacted until there is (1) "broad scientific consensus that it will be done safely and controllably," as well as (2) "strong public buy-in." The petition had more than 1,300 signatures as of late Wednesday morning. Prominent signatories include Geoffrey Hinton and Yoshua Bengio, both of whom shared a Turing Award in 2018 (along with fellow researcher Yann LeCun) for their pioneering work on neural networks and are now known as two of the "Godfathers of AI."
Computer scientist Stuart Russell, Apple cofounder Steve Wozniak, Virgin Group founder Sir Richard Branson, former Trump administration Chief Strategist Steve Bannon, political commentator Glenn Beck, author Yuval Noah Harari, and many other notable figures in tech, government, and academia have also signed the statement. And they aren't the only ones who appear worried about superintelligence. On Sunday, the FLI published the results of a poll it conducted with 2,000 American adults that found that 64% of respondents "feel that superhuman AI should not be developed until it is proven safe and controllable, or should never be developed." It isn't always easy to draw a neat line between marketing bluster and technical legitimacy, especially when it comes to a technology as buzzy as AI. Like artificial general intelligence, or AGI, "superintelligence" is a hazily defined term that's recently been co-opted by some tech developers to describe the next rung in the evolutionary ladder of AI: an as-yet-unrealized machine that can do anything the human brain can do, only better. In June, Meta launched an internal R&D arm devoted to building the technology, which it calls Superintelligence Labs. At around the same time, OpenAI CEO Sam Altman published a personal blog post arguing that the advent of superintelligence was imminent. (The FLI petition cited a 2015 blog post from Altman in which he described "superhuman machine intelligence" as "probably the greatest threat to the continued existence of humanity.") The term "superintelligence" was popularized by a 2014 book of the same name by the Oxford philosopher Nick Bostrom, which was largely written as a warning about the dangers of building self-improving AI systems that could one day escape human control.
Bengio, Russell, and Wozniak were also among the signatories of a 2023 open letter, also published by the FLI, that called for a six-month pause on the training of powerful AI models. Though that letter received widespread attention in the media and helped kindle public debate about AI safety, the momentum to quickly build and commercialize new AI models -- which, by that point, had thoroughly overtaken the tech industry -- ultimately overpowered the will to implement a wide-scale moratorium. Significant AI regulation, at least in the US, is also still lacking. That momentum has only grown as competition has spilled over the boundaries of Silicon Valley and across international borders. President Donald Trump and some prominent tech leaders like OpenAI CEO Sam Altman have framed the AI race as a geopolitical and economic competition between the US and China. At the same time, safety researchers from prominent AI companies including OpenAI, Anthropic, Meta, and Google have issued occasional, smaller-scale statements about the importance of monitoring certain components of AI models for risky behavior as the field evolves.
[3]
AI heavyweights call for end to 'superintelligence' research
UNSW Sydney provides funding as a member of The Conversation AU. I have worked in AI for more than three decades, including with pioneers such as John McCarthy, who coined the term "artificial intelligence" in 1955. In the past few years, scientific breakthroughs have produced AI tools that promise unprecedented advances in medicine, science, business and education. At the same time, leading AI companies have the stated goal to create superintelligence: not merely smarter tools, but AI systems that significantly outperform all humans on essentially all cognitive tasks. Superintelligence isn't just hype. It's a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world's best researchers. What was once science fiction has become a concrete engineering goal for the coming decade. In response, I and hundreds of other scientists, global leaders and public figures have put our names to a public statement calling for superintelligence research to stop.

What the statement says

The new statement, released today by the AI safety nonprofit Future of Life Institute, is not a call for a temporary pause, as we saw in 2023. It is a short, unequivocal call for a global ban: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." The list of signatories represents a remarkably broad coalition, bridging divides that few other issues can. The "godfathers" of modern AI are present, such as Yoshua Bengio and Geoff Hinton. So are leading safety researchers such as UC Berkeley's Stuart Russell. But the concern has broken free of academic circles. The list includes tech and business leaders such as Apple cofounder Steve Wozniak and Virgin's Richard Branson.
It includes high-level political and military figures from both sides of US politics, such as former National Security Advisor Susan Rice and former chairman of the Joint Chiefs of Staff Mike Mullen. It also includes prominent media figures such as Glenn Beck and former Trump strategist Steve Bannon, together with artists such as Will.I.am and respected historians such as Yuval Noah Harari.

Why superintelligence poses a unique challenge

Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology. Superintelligence could extend this trajectory, but with a crucial difference. People will no longer be in control. The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that's producing greenhouse gases. Instruct it to maximise human happiness, and it might find a way to trap every human brain in a perpetual dopamine loop. Or, in Swedish philosopher Nick Bostrom's famous example, a superintelligence tasked with producing as many paperclips as possible might try to convert all of Earth's matter, including us, into raw material for its factories. The issue is not malice but mismatch: a system that understands its instructions too literally, with the power to act cleverly and swiftly. History shows what can go wrong when our systems grow beyond our capacity to predict, contain or control them. The 2008 financial crisis began with financial instruments so intricate that even their creators could not foresee how they would interact until the entire system collapsed.
Cane toads introduced in Australia to fight pests have instead devastated native species. The COVID pandemic exposed how global travel networks can turn local outbreaks into worldwide crises. Now we stand on the verge of creating something far more complex: a mind that can rewrite its own code, redesign and achieve its goals, and out-think every human combined.

A history of inadequate governance

For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence. The new statement on superintelligence aims to start a global conversation not just on specific AI tools, but on the very destination AI developers are steering us toward. The goal of AI should be about creating powerful tools to serve humanity. This does not mean autonomous superintelligent agents that can operate beyond human control without aligning with human well-being. We can have a future of AI-powered medical breakthroughs, scientific discovery, and personalised education. None of these require us to build an uncontrollable superintelligence that could unilaterally decide the fate of humanity.
[4]
Apple co-founder Steve Wozniak supports interim ban on AGI
Apple co-founder Steve Wozniak is one of more than a thousand public figures to call for an interim ban on the development of AI superintelligence. Other signatories include Nobel laureates, AI pioneers, and other tech luminaries ... The statement is short and to the point. Here it is in its entirety: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence. We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in. Many of the same signatories have previously said that artificial general intelligence (AGI) poses as big a threat of human extinction as pandemics and nuclear war.
[5]
AI experts push to pause superintelligence
Why it matters: AI development is moving at breakneck speed with minimal oversight and with the full-throated endorsement of the Trump administration. * AI "doomers" have lost their foothold with U.S. policymakers. But they're still trying to be heard, and are highly involved in global AI policy debates. Driving the news: The call to action, organized by the Future of Life Institute, has more than 800 signatures from a diverse group, including: * AI pioneers Yoshua Bengio and Geoffrey Hinton, Apple co-founder Steve Wozniak, Sir Richard Branson, Steve Bannon, Susan Rice, will.i.am and Joseph Gordon-Levitt. * The group also released polling that found that three-quarters of U.S. adults want strong regulations on AI development, with 64% of those polled saying they want an "immediate pause" on advanced AI development, per a survey of 2,000 adults from Sept. 29 - Oct. 5. Yes, but: In early 2023, the Future of Life Institute and many of the same signatories published a similar letter calling for a six-month pause on training any models more powerful than GPT-4. * That pause was largely ignored. What they're saying: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," a statement from the group's website reads.
[6]
AI heavyweights call for end to 'superintelligence' research
I have worked in AI for more than three decades, including with pioneers such as John McCarthy, who coined the term "artificial intelligence" in 1955. In the past few years, scientific breakthroughs have produced AI tools that promise unprecedented advances in medicine, science, business and education. At the same time, leading AI companies have the stated goal to create superintelligence: not merely smarter tools, but AI systems that significantly outperform all humans on essentially all cognitive tasks. Superintelligence isn't just hype. It's a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world's best researchers. What was once science fiction has become a concrete engineering goal for the coming decade. In response, I and hundreds of other scientists, global leaders and public figures have put our names to a public statement calling for superintelligence research to stop.

What the statement says

The new statement, released today by the AI safety nonprofit Future of Life Institute, is not a call for a temporary pause, as we saw in 2023. It is a short, unequivocal call for a global ban: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." The list of signatories represents a remarkably broad coalition, bridging divides that few other issues can. The "godfathers" of modern AI are present, such as Yoshua Bengio and Geoff Hinton. So are leading safety researchers such as UC Berkeley's Stuart Russell. But the concern has broken free of academic circles. The list includes tech and business leaders such as Apple cofounder Steve Wozniak and Virgin's Richard Branson.
It includes high-level political and military figures from both sides of US politics, such as former National Security Advisor Susan Rice and former chairman of the Joint Chiefs of Staff Mike Mullen. It also includes prominent media figures such as Glenn Beck and former Trump strategist Steve Bannon, together with artists such as Will.I.am and respected historians such as Yuval Noah Harari.

Why superintelligence poses a unique challenge

Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology. Superintelligence could extend this trajectory, but with a crucial difference. People will no longer be in control. The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that's producing greenhouse gases. Instruct it to maximize human happiness, and it might find a way to trap every human brain in a perpetual dopamine loop. Or, in Swedish philosopher Nick Bostrom's famous example, a superintelligence tasked with producing as many paperclips as possible might try to convert all of Earth's matter, including us, into raw material for its factories. The issue is not malice but mismatch: a system that understands its instructions too literally, with the power to act cleverly and swiftly. History shows what can go wrong when our systems grow beyond our capacity to predict, contain or control them. The 2008 financial crisis began with financial instruments so intricate that even their creators could not foresee how they would interact until the entire system collapsed.
Cane toads introduced in Australia to fight pests have instead devastated native species. The COVID pandemic exposed how global travel networks can turn local outbreaks into worldwide crises. Now we stand on the verge of creating something far more complex: a mind that can rewrite its own code, redesign and achieve its goals, and out-think every human combined.

A history of inadequate governance

For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence. The new statement on superintelligence aims to start a global conversation not just on specific AI tools, but on the very destination AI developers are steering us toward. The goal of AI should be about creating powerful tools to serve humanity. This does not mean autonomous superintelligent agents that can operate beyond human control without aligning with human well-being. We can have a future of AI-powered medical breakthroughs, scientific discovery, and personalized education. None of these require us to build an uncontrollable superintelligence that could unilaterally decide the fate of humanity. This article is republished from The Conversation under a Creative Commons license. Read the original article.
[7]
One thing most of us seem to agree on, from Sir Stephen Fry to Steve Bannon, is that artificial superintelligence development should be paused while we figure out, y'know, the safety concerns
Artificial superintelligence is, for most of us I think, quite a scary thought. A human-like AI, far more intelligent and capable than any we've seen before, developed by companies that often seem unprepared for (or uncaring about) the potential knock-on effects of its deployment. Goody. Still, we've been told by industry leaders like Mark Zuckerberg that artificial superintelligence is coming, and we should all be very excited. Many are not, it seems, and that many includes a substantial list of celebrities, former government officials, philosophers, engineers, and scientists who've signed an online statement calling for its prohibition, created by The Future of Life Institute. The statement reads: "We call for a prohibition on the development of superintelligence, not lifted before there is: 1. Broad scientific consensus that it will be done safely and controllably, and 2. Strong public buy-in." Good luck on that last one, that's all I'll say. Still, the statement has been signed by a vast array of notable figures, from Sir Stephen Fry to former White House chief strategist Steve Bannon (via The Guardian). Who could have guessed those two would ever stand on the same side? Not me, that's for sure. Scrolling down the list, it's surprising to see the range of public figures seemingly happy to put their name down against the rise of human-like AI, at least until we can figure out, as a species, whether it can be implemented safely and controllably. Sir Richard Branson is in there. Apple co-founder Steve Wozniak, too. Oh, and Will.I.am, Grimes, and Prince Harry, among others. It makes for quite a dinner party list, at the very least. I'd love to see, say, Paolo Benanti, the current papal AI advisor, sitting alongside Joseph Gordon-Levitt discussing future technologies over a cocktail or two. Actually, the latter left a rather astute point underneath his signature: "Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc."
said Gordon-Levitt. "But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? "Most people don't want that. But that's what these big tech companies mean when they talk about building 'Superintelligence'." Powerful words, Mr Gordon-Levitt. I also rather enjoyed Looper, just so you're aware. We can be friends. All joking aside, while the point against the creation of superintelligent AI without scientific consensus and public support seems a noble one, I can't help but feel that the private companies beavering away at it in their labs will take little notice. At least, not without some serious government intervention, which looks unlikely given that many administrations around the world seem only too happy to let AI investment boost their respective economies until the wheels, seemingly inevitably, fall off. But keep fighting the good fight, you motley band of dissenters, you. It's worth mentioning that anyone can sign alongside the great, the good, and the dubious listed here, so if you agree with their point, feel free to jot your details down using the form linked at the top of the page. Your name may be published alongside some esteemed company, at the very least, and won't that be fun?
[8]
Geoffrey Hinton, Yoshua Bengio sign statement urging suspension of AGI development - SiliconANGLE
Hundreds of scientists, politicians, entrepreneurs and artists have signed a statement urging the suspension of efforts to develop artificial general intelligence, or AGI. The brief document was published today on the website of the Future of Life Institute, a nonprofit co-founded by Skype co-creator Jaan Tallinn. Tallinn was also an early investor in DeepMind, a startup that eventually became Alphabet Inc.'s Google DeepMind machine learning lab. AGI is a hypothetical form of AI capable of performing many tasks better than humans. The signatories to today's statement argue that development of AGI should be suspended because of the technology's potential for harm. They wrote that "we call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." The latter point ties into a survey the Future of Life Institute published on Monday. The survey, which polled 2,000 U.S. adults, found that 64% of the respondents believe efforts to develop superhuman AI should be suspended until the technology is proven safe or banned outright. The signatories to today's statement include two of the most prominent figures in AI research: Geoffrey Hinton and Yoshua Bengio. The two computer scientists developed several of the technologies that underpin today's large language models. Hinton, who won half the 2024 Nobel Prize in Physics for his contributions to AI research, helped develop and popularize the backpropagation algorithm that neural networks use to learn new skills. Hinton earlier received the Turing Award, the most prestigious prize in computer science, together with two other researchers. One of those researchers was Bengio, who led early work on embeddings and developed a predecessor to LLMs' attention mechanism. Today's AGI statement also drew the backing of numerous other prominent figures.
The group includes four other Nobel laureates besides Hinton, dozens of professors and Apple Inc. co-founder Steve Wozniak. Beyond the academic and technology ecosystems, the statement drew the backing of a half dozen former members of Congress, three members of the European Parliament, Prince Harry, Meghan Markle, Richard Branson and others. Notably, the statement was also signed by two current employees of OpenAI and Anthropic PBC. The two companies would likely be among the first affected by initiatives to regulate AGI development.
[9]
From Tech Icons to Nobel Laureates, Global Voices Unite Against Superintelligence
Virgin Group founder Richard Branson has also signed the letter.

Some of the world's brightest minds and most influential individuals signed a letter on Wednesday, asking for a ban on the development of artificial superintelligence, AI that would significantly outperform humans. The list of signatories includes Apple Co-Founder Steve Wozniak; Nobel Prize laureate Geoffrey Hinton and Turing Award winner Yoshua Bengio, both known as "godfathers of AI"; as well as multiple US politicians and Hollywood celebrities. The letter insists on prohibiting the development of AI superintelligence until the world has the chance to build safeguards for it.

Scientists and Tech Leaders Express Concerns Over AI Superintelligence

Future of Life Institute put forward the letter titled "Statement on Superintelligence," which has been signed more than 23,000 times as of this writing. The list of signatories also includes more than 3,000 public figures from different walks of life. The statement is short and to the point: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." As per the letter, if major AI players actively developing superintelligence are not stopped, the consequences could range from economic instability and disempowerment, losses of freedom, dignity, and civil liberties, to national security risks and even "potential human extinction." Apart from the Apple Co-Founder and the godfathers of AI, other notable signatories include Virgin Group founder Richard Branson; Prince Harry and Meghan, Duke and Duchess of Sussex; the Pope's AI adviser, Paolo Benanti; former US National Security Advisor (under Barack Obama) Susan Rice; actor Joseph Gordon-Levitt; Sir Stephen Fry; rapper Will.I.am; and others.
While this is not the first petition or letter to express concerns about superintelligence, it is the largest such effort, with so many influential individuals showing support for the initiative. Signing the letter, Bengio said, "To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future."
Over 700 prominent figures, including AI pioneers and celebrities, have signed a statement calling for a prohibition on AI superintelligence development. The move comes amid growing concerns about the potential risks of advanced AI systems.
In a striking display of concern over the rapid advancement of artificial intelligence, more than 700 prominent figures have signed a statement calling for the prohibition of AI superintelligence development. The signatories include AI pioneers, tech leaders, celebrities, and political figures, reflecting a growing unease about the potential risks associated with highly advanced AI systems [1].
The statement, published by the Future of Life Institute (FLI), argues that the development of AI systems capable of outperforming humans on nearly all cognitive tasks poses significant risks. These concerns range from human economic obsolescence and loss of freedom to potential national security threats and even human extinction [2]. The signatories are calling for a halt on superintelligence development until two key conditions are met: broad scientific consensus that it will be done safely and controllably, and strong public buy-in [3].
The list of signatories includes AI "godfathers" Yoshua Bengio and Geoffrey Hinton, Apple co-founder Steve Wozniak, Virgin Group founder Richard Branson, and celebrities such as Kate Bush and Joseph Gordon-Levitt [1][4]. A national poll conducted by FLI revealed that only 5% of Americans support the current fast, unregulated development towards superintelligence. Moreover, 64% believe that superintelligent AI shouldn't be developed until proven safe and controllable, while 73% want robust regulation on advanced AI [1].
The concept of superintelligence, popularized by philosopher Nick Bostrom, refers to a hypothetical AI system that can outperform humans on any cognitive task. While some view it as the next evolutionary step in AI development, others warn of potential catastrophic consequences [2].
Critics argue that a superintelligent AI might pursue its goals with indifference to human needs, potentially leading to unintended and harmful outcomes. Examples range from drastic solutions to climate change to the conversion of Earth's resources for singular purposes [3].
Despite previous calls for pauses in AI development, the momentum to build and commercialize new AI models has continued to grow. The race for AI supremacy has even been framed as a geopolitical and economic competition between nations [2]. As the debate intensifies, the call for a prohibition on superintelligence development marks a significant escalation in efforts to address the potential risks of advanced AI systems. The diverse coalition of signatories underscores the growing recognition of AI's far-reaching implications across various sectors of society [5].

Summarized by Navi