Curated by THEOUTPOST
On Thu, 24 Oct, 8:11 AM UTC
15 Sources
[1]
Is OpenAI ready for AGI? Not so much anymore - Softonic
The company disbands the team responsible for advising it on AGI following the departure of its chief advisor.
OpenAI has decided to dissolve its AGI Readiness team, which was tasked with assessing whether the company is prepared to handle advanced artificial intelligence and whether society could manage such a powerful technology. Miles Brundage, the team's principal advisor, announced his departure in a statement on Substack, where he explained that the decision was driven by a desire to take a more impartial approach. "I thought my research would have more impact outside of OpenAI," he stated. Artificial general intelligence (AGI) is a field of AI research that seeks to create systems that can match or surpass human intelligence across a wide range of tasks. The topic has generated considerable debate, both in society at large and in the technology world, with positions ranging from those who believe AGI is close to becoming a reality to those who claim it will never be achieved. Brundage wrote in his message that "neither OpenAI nor any other lab is prepared, and the world isn't either". He also revealed his intention to found a non-profit organization, or join an existing one, to focus on AI policy research. "It is unlikely that AI will be as safe and beneficial as possible without a concerted effort," he commented, emphasizing the need for a collective approach to maximize the benefits and minimize the risks of AI. Former members of the AGI Readiness team will be reassigned to other teams within the company. OpenAI said it supports Brundage's decision, expressing gratitude for his contributions and respect for his commitment to independent research. This change follows the dissolution in May of OpenAI's Superalignment team, which focused on the development of advanced control systems for AI, and comes after the recent departures of other senior executives from OpenAI, including Chief Technology Officer Mira Murati and research leads Bob McGrew and Barret Zoph.
[2]
OpenAI's AGI readiness team has been dissolved
OpenAI has once again trimmed its safety-focused operations, dissolving its AGI Readiness team -- a group dedicated to preparing for so-called artificial general intelligence. Miles Brundage, a senior advisor on the team, broke the news in a post on his Substack Wednesday, which also confirmed his departure from the company. In the post, Brundage hinted at a growing need for independence in his work, suggesting that his exit reflects a desire for more freedom as he continues to explore the rapidly evolving AI landscape. "I decided that I want to impact and influence AI's development from outside the industry rather than inside," Brundage wrote, adding, "I've done much of what I set out to do at OpenAI." Brundage went on to express broader concerns, stating, "Neither OpenAI nor any other frontier lab is ready, and the world is also not ready." According to Brundage, this isn't a lone sentiment -- many within OpenAI's senior ranks share these reservations. As for the AGI Readiness team, former members are set to be reassigned across other divisions within OpenAI. A company spokesperson told CNBC they back Brundage's choice to move on. Still, the timing is rough for OpenAI, which has been navigating an exodus of senior leadership at a moment when stability is key. Although it was able to snag a top AI researcher from Microsoft, that addition doesn't fill the many recent gaps in OpenAI's upper ranks. The leadership shifts and team dissolutions aren't helping to quell mounting concerns around OpenAI's push toward AGI, particularly since its controversial announcement of a plan to become a fully for-profit company after beginning its life as a nonprofit. Back in May, the company disbanded its SuperAlignment team -- a group tasked with pioneering "scientific and technical breakthroughs to steer and control AI systems much smarter than us." Around the same time, OpenAI also reassigned its top AI safety leader, raising eyebrows within the AI ethics community and beyond.
[3]
OpenAI is disbanding yet another AI safety team
OpenAI is disbanding the team responsible for advising the startup on how ready the world is for higher levels of artificial intelligence. Miles Brundage, head of OpenAI's "AGI Readiness" team, is leaving the company this week, he said in a Substack post on Wednesday. Brundage said he plans to start a non-profit or join an existing one focused on researching AI policy. A sub-team for economic research will move under OpenAI's new chief economist, Ronnie Chatterji, Brundage said, while other members will be reassigned to other teams. The outgoing senior advisor said neither the ChatGPT maker nor other frontier AI labs are ready for AGI, or artificial general intelligence, which is considered to be human-level intelligence. Part of the reason for starting or joining a non-profit, Brundage said, is because "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." Brundage called his advising role "essentially my dream job" but said he wants "to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish and to be more independent." One of his main reasons for leaving is high opportunity costs due to OpenAI's massive profile, and his feeling that his work would "be more impactful" outside of the AI industry. Brundage also said he wants "to be less biased" and has "done much of what I set out to do at OpenAI." "We fully support Miles' decision to pursue his policy research outside industry and are deeply grateful for his contributions," an OpenAI spokesperson said in a statement shared with Quartz. "His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We're confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government." In May, OpenAI disbanded its "superalignment" team, which was focused on AI's existential dangers. OpenAI co-founder and former chief scientist Ilya Sutskever and Jan Leike, his superalignment team co-founder, both resigned from the AI startup that month. Mira Murati, the company's former chief technology officer, announced her departure from the company in September and was followed by OpenAI chief research officer Bob McGrew and vice president of research Barret Zoph. A week after the resignations, OpenAI completed a $6.6 billion funding round, leading to a valuation of $157 billion. The startup is also reportedly working on a plan to restructure its nonprofit status, which could give chief executive Sam Altman major equity.
[4]
No one is ready for AGI -- not even OpenAI
His exit marks the latest in a series of high-profile departures from OpenAI's safety teams. Jan Leike, a prominent researcher, left after claiming that "safety culture and processes have taken a backseat to shiny products." Cofounder Ilya Sutskever also departed to launch his own AI startup focused on safe AGI development. The dissolution of Brundage's "AGI Readiness" team, coming just months after the company disbanded its "Superalignment" team dedicated to long-term AI risk mitigation, highlights mounting tensions between OpenAI's original mission and its commercial ambitions. The company reportedly faces pressure to transition from a nonprofit to a for-profit public benefit corporation within two years -- or risk returning funds from its recent $6.6 billion investment round. This shift toward commercialization has long concerned Brundage, who expressed reservations back in 2019 when OpenAI first established its for-profit division.
[5]
OpenAI's AGI Czar Quits, Saying the Company Isn't Ready For What It's Building
OpenAI's researcher in charge of making sure the company (and the world) is prepared for the advent of artificial general intelligence (AGI) has resigned -- and is warning that nobody is ready for what's coming next. In a post on his personal Substack, the firm's newly-resigned AGI readiness czar Miles Brundage said quitting his "dream job" after six years has been difficult. He says he's doing so because he feels a great responsibility regarding the purportedly human-level artificial intelligence he believes OpenAI is ushering into existence. "I decided," Brundage wrote, "that I want to impact and influence AI's development from outside the industry rather than inside." When it comes to being prepared to handle the still-theoretical tech, the researcher was unequivocal. "In short, neither OpenAI nor any other frontier lab is ready," he wrote, "and the world is also not ready." After that bold declaration, Brundage went on to say that he's shared his outlook with OpenAI's leadership. He added, for what it's worth, that he thinks "AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense." Instead of there being some before-and-after AGI framework, the researcher said that there are, to quote many a hallucinogen enthusiast, levels to this shit. Indeed, Brundage said he was instrumental in the creation of OpenAI's five-step scale of AI/AGI levels that got leaked to Bloomberg over the summer. On that scale, which ends with AI that can "do the work of an organization," OpenAI believes the world is currently at the precipice of level two, which would be characterized by AI that has the capability of human-level reasoning. All the same, Brundage insists that both OpenAI and the world at large remain unprepared for the next-generation AI systems being built. Notably, Brundage still believes that while AGI can benefit all of humanity, it won't automatically do so. Instead, the humans in charge of making it -- and regulating it -- have to go about doing so deliberately. That caveat suggests that he may not think OpenAI is being sufficiently deliberate in how it approaches AGI stewardship. With the senior researcher's exit, Brundage says OpenAI is reassigning members of its AGI readiness team to other groups within the organization. This dissolution comes less than six months after it kiboshed its AI safety team, which doesn't exactly bode well as this latest big-name resignation shakes up the company's leadership.
[6]
OpenAI loses another senior figure
Artificial General Intelligence readiness advisor Miles Brundage bails, because nobody is ready.
OpenAI has lost another senior staffer, and on his way out the door this one warned that the company - along with every other AI shop - is just not ready for artificial general intelligence. The departing exec is Miles Brundage, who on Friday will cease working as senior advisor for AGI readiness. AGI - artificial general intelligence - is the term used to describe AI that appears to have the same cognitive abilities as a human. Like people, AGIs could theoretically learn almost anything. Preparing for the arrival of AGI is regarded as an important and responsible action, given the possibility AGIs could do better than humans in some fields. Brundage revealed his departure in a Substack post in which he explained his decision as a desire to contemplate OpenAI's AGI readiness, and the world's, without having his view biased by being an employee. In his post, he wrote that OpenAI has "gaps" in its readiness - but so does every other advanced AI lab. "Neither OpenAI nor any other frontier lab is ready, and the world is also not ready," he wrote, adding "To be clear, I don't think this is a controversial statement among OpenAI's leadership, and notably, that's a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I'll be working on AI policy for the rest of my career)." Despite that state of unreadiness, his post reveals that OpenAI's AGI readiness team will be dispersed among other teams, as part of a re-org. Brundage also opined that "AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, non-profits, civil society, and industry, and this needs to be informed by robust public discussion." Those efforts need to consider both safety and equitable distribution of benefits, he suggested. "I think AI capabilities are improving very quickly and policymakers need to act more urgently," he noted, but suggested recent experience in fields such as pandemic preparedness means action won't happen unless leaders can communicate a sense of urgency. "I think we don't have all the AI policy ideas we need, and many of the ideas floating around are bad or too vague to be confidently judged," he added. Brundage wrote that one idea he disagrees with "is for democratic countries to race against autocratic countries." "I think that having and fostering such a zero-sum mentality increases the likelihood of corner-cutting on safety and security," he suggested, before urging "academics, companies, civil society, and policymakers [to] work collaboratively to find a way to ensure that Western AI development is not seen as a threat to other countries' safety or regime stability, so that we can work across borders to solve the very thorny safety and security challenges ahead." He affirms that collaboration is important - despite his belief that it is very likely "Western countries continue to substantially outcompete China on AI." The Middle Kingdom and other autocratic nations have enough tech to "build very sophisticated capabilities," so failing to engage on safety and risk management would be dangerously short-sighted. While Brundage sees many reasons to consider AI safety, he also sees plenty of upside.
"I think it's likely that in the coming years (not decades), AI could enable sufficient economic growth that an early retirement at a high standard of living is easily achievable," he wrote. "Before that, there will likely be a period in which it is easier to automate tasks that can be done remotely." But there may be some tough years first. "In the near term, I worry a lot about AI disrupting opportunities for people who desperately want work." If we get it right, he thinks humanity will have the option to "remove the obligation to work for a living." "That is not something we're prepared for politically, culturally, or otherwise, and needs to be part of the policy conversation," he suggested. "A naïve shift towards a post-work world risks civilizational stagnation (see: Wall-E), and much more thought and debate about this is needed." Post-work is also a matter OpenAI considers quite often, as Brundage joins the org's CTO, chief research officer and research VP, plus co-founder Ilya Sutskever, as recent departures from the AI standard-bearer. Brundage played down the significance of his own departure, writing "I have been here for over six years, which is pretty long by OpenAI standards (it has grown a lot over those six years!)." ®
[7]
OpenAI's 'AGI Readiness' chief quits -- 'I want to be less biased'
Long-time OpenAI safety researcher Miles Brundage will start or join a nonprofit focused on AI policy after quitting his position as head of "AGI Readiness" and leaving the company on Oct. 23. Brundage, who joined OpenAI in 2018 as a researcher, was positioned as the company's top safety officer. In a Substack post discussing his departure, he said that OpenAI's "AGI Readiness" team will be disbanded, potentially leaving the firm without a dedicated safety department. AGI, or artificial general intelligence, is a theoretical paradigm wherein an AI model would be capable of performing any task a human could do if given the same resources. Brundage's team's primary purpose appeared to have been providing policy suggestions and safety oversight for the company's continuing development toward AGI. His exit, according to the post, was prompted by the desire to focus on independent research and advocacy. He cited several reasons for his departure, including the desire to "be less biased" by his corporate setting and to seek opportunities through either entrepreneurship or a different organization. Per Brundage: "I plan to start a new nonprofit (and/or join an existing nonprofit) and will work on AI policy research and advocacy. I will probably do some mix of research and advocacy but the details and ratios are TBD." The researcher might not be out of a job for long, however, as he's already made a potential business connection with Brendan McCord, founder of the Cosmos Institute, an organization dedicated to developing technology with a humanist approach.
AGI Readiness
The significance of Brundage's departure from OpenAI remains unclear. Cointelegraph reached out to OpenAI and was given the following statement: "We fully support Miles' decision to pursue his policy research outside industry and are deeply grateful for his contributions. His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We're confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government." The company declined to respond to Cointelegraph's questions concerning the future of the AGI Readiness team. Prior to the creation of the AGI Readiness team, the company's primary AGI oversight department was the "Superalignment" team, co-led by company co-founder Ilya Sutskever and researcher Jan Leike. That team was disbanded after Leike left the company in May, claiming he'd reached his "breaking point" after disagreements with management. Sutskever quit the firm that same month and has since gone on to found his own competing artificial intelligence company, already valued at an estimated $5 billion. OpenAI had already lost founding member Andrej Karpathy by this point, who'd left to start his own company in February. Another co-founder, John Schulman, called it quits in August to join competitor Anthropic. Meanwhile, former chief technology officer Mira Murati is reportedly raising funds to start a competing artificial intelligence company as well.
[8]
OpenAI Senior Advisor Joins Exodus, Warns 'Neither OpenAI Nor Any Other Frontier Lab Is Ready' For AGI
Another senior figure has departed OpenAI, warning that neither his former employer nor any other artificial intelligence company is adequately prepared for the development of artificial general intelligence. What Happened: Miles Brundage, who served as OpenAI's senior advisor for AGI Readiness, announced his resignation in a Thursday blog post, marking the latest in a series of high-profile departures from the AI research company. "Neither OpenAI nor any other frontier lab is ready, and the world is also not ready," Brundage wrote, emphasizing concerns about the industry's preparedness for advanced AI development. Brundage's exit follows several other significant departures from OpenAI, including CTO Mira Murati and key members of the company's safety team, including Ilya Sutskever and Jan Leike, who led the superalignment team. During his six-year tenure at OpenAI, Brundage played crucial roles in shaping the company's deployment practices, including establishing its external red teaming program and driving system card development. He plans to launch a nonprofit focused on AI policy research and advocacy. The departure comes as OpenAI continues to attract significant investment, having secured over $6.5 billion in funding at a $157 billion valuation. However, Brundage expressed concerns about the broader AI industry's approach to safety and security, particularly regarding systems capable of posing catastrophic risks. "I think AI capabilities are improving very quickly and policymakers need to act more urgently," Brundage stated, emphasizing the need for enhanced regulation and safety measures across the AI industry.
[9]
OpenAI senior safety staffer leaves company to write unbiased warning about coming AI
A senior staffer in OpenAI's AGI readiness team has left the company and posted a warning about how OpenAI and the world simply aren't ready. OpenAI has suffered yet another blow, as a senior staffer in the company's AGI Readiness team, a team dedicated to advising OpenAI on the impact of the powerful AI models it's creating and how ready the world is for them, has left the company. This was promptly followed by a warning published to the former OpenAI staffer's Substack account. The former OpenAI senior staffer is Miles Brundage, who, as of Friday this week, will no longer be working on OpenAI's AGI Readiness team. For those who don't know, AGI stands for Artificial General Intelligence, which describes an AI model with the same level of cognitive ability as a human across all fields. This level of sophistication has yet to be fully achieved, but given the potential impact of such a system coming online, or falling into the wrong hands, guardrail teams such as the AGI Readiness team were formed. However, Brundage states in his post that OpenAI has "gaps" in its readiness, though it isn't alone in this problem, as every other AI lab does too. According to Brundage, neither OpenAI nor any other AI company, nor the world, is ready for AGI. Additionally, the post by the former OpenAI staffer revealed that his departure triggered a complete disbanding of the AGI Readiness team, which comes at a time when OpenAI is attempting an internal restructuring into a for-profit business.
[10]
OpenAI disbands another safety team, head advisor for 'AGI Readiness' resigns
OpenAI is disbanding its "AGI Readiness" team, which advised the company on OpenAI's own capacity to handle increasingly powerful AI and the world's readiness to manage that technology, according to the head of the team. On Wednesday, Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company via a Substack post. He wrote that his primary reasons were that the opportunity cost had become too high and he thought his research would be more impactful externally, that he wanted to be less biased and that he had accomplished what he set out to at OpenAI. Brundage also wrote that, as far as how OpenAI and the world are doing on AGI readiness, "Neither OpenAI nor any other frontier lab is ready, and the world is also not ready." Brundage plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy. He added that "AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." Former AGI Readiness team members will be reassigned to other teams, according to the post. "We fully support Miles' decision to pursue his policy research outside industry and are deeply grateful for his contributions," an OpenAI spokesperson told CNBC. "His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We're confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government." In May, OpenAI decided to disband its Superalignment team, which focused on the long-term risks of AI, just one year after it announced the group, a person familiar with the situation confirmed to CNBC at the time. News of the AGI Readiness team's disbandment follows the OpenAI board's potential plans to restructure the firm into a for-profit business, and after three executives -- CTO Mira Murati, research chief Bob McGrew and research VP Barret Zoph -- announced their departure on the same day last month. Earlier in October, OpenAI closed its buzzy funding round at a valuation of $157 billion, including the $6.6 billion the company raised from an extensive roster of investment firms and big tech companies. It also received a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion. The company expects about $5 billion in losses on $3.7 billion in revenue this year, CNBC confirmed with a source familiar with the matter last month. And in September, OpenAI announced that its Safety and Security Committee, which the company introduced in May as it dealt with controversy over security processes, would become an independent board oversight committee. It recently wrapped up its 90-day review evaluating OpenAI's processes and safeguards and then made recommendations to the board, with the findings also released in a public blog post. News of the executive departures and board changes also follows a summer of mounting safety concerns and controversies surrounding OpenAI, which along with Google, Microsoft, Meta and other companies is at the helm of a generative AI arms race -- a market that is predicted to top $1 trillion in revenue within a decade -- as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
In July, OpenAI reassigned Aleksander Madry, one of OpenAI's top safety executives, to a job focused on AI reasoning instead, sources familiar with the situation confirmed to CNBC at the time. Madry was OpenAI's head of preparedness, a team that was "tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models," according to a bio for Madry on a Princeton University AI initiative website. Madry will still work on core AI safety work in his new role, OpenAI told CNBC at the time. The decision to reassign Madry came around the same time that Democratic senators sent a letter to OpenAI CEO Sam Altman concerning "questions about how OpenAI is addressing emerging safety concerns." The letter, which was viewed by CNBC, also stated, "We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company's identification and mitigation of cybersecurity threats." Microsoft gave up its observer seat on OpenAI's board in July, stating in a letter viewed by CNBC that it can now step aside because it's satisfied with the construction of the startup's board, which had been revamped since the uprising that led to the brief ouster of Altman and threatened Microsoft's massive investment in the company. But in June, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up. "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," the employees wrote at the time. Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies' conduct. FTC Chair Lina Khan has described her agency's action as a "market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers." The current and former employees wrote in the June letter that AI companies have "substantial non-public information" about what their technology can do, the extent of the safety measures they've put in place and the risk levels that technology has for different types of harm. "We also understand the serious risks posed by these technologies," they wrote, adding the companies "currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily." OpenAI's Superalignment team, announced last year and disbanded in May, had focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years. The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI's "safety culture and processes have taken a backseat to shiny products." Altman said at the time on X he was sad to see Leike leave and that OpenAI had more work to do.
Soon afterward, co-founder Greg Brockman posted a statement attributed to Brockman and the CEO on X, asserting the company has "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it." "I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X at the time. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact. "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," he wrote at the time. "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done." Leike added that OpenAI must become a "safety-first AGI company." "Building smarter-than-human machines is an inherently dangerous endeavor," he wrote on X. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."
[11]
OpenAI's AGI Readiness Questioned After Key Departure
The recent departure of Miles Brundage, a prominent researcher and manager at OpenAI, has sent ripples through the artificial intelligence community. Brundage's exit was accompanied by a thought-provoking statement on the readiness for Artificial General Intelligence (AGI), raising significant concerns about the preparedness of both OpenAI and the global community to handle the profound impacts of this transformative technology. Imagine waking up one day to a world where machines possess the same level of intelligence as humans, capable of performing any intellectual task we can. However, as exciting as this prospect might be, it also brings with it a wave of uncertainty and concern. Recently, Miles Brundage, a respected researcher and manager at OpenAI, made headlines with his departure from the company, leaving behind a powerful statement on the readiness -- or lack thereof -- for AGI. His insights have sparked a crucial conversation about whether we, as a society, are truly prepared to embrace and manage the profound changes AGI could bring. Brundage's departure highlights a pressing issue that cannot be ignored: the world is not ready for AGI's transformative impacts. Despite the rapid advancements in AI technology, there are significant gaps in our preparedness, both at the organizational level within pioneering labs like OpenAI and globally. The challenges are multifaceted, ranging from ensuring the safe and ethical deployment of AI to addressing potential economic disruptions and societal shifts. Yet, amidst these concerns, there is a glimmer of hope. Brundage's call for independent voices in AI policy discussions and the development of robust governance frameworks hints at a path forward. By fostering a shared understanding of AI's potential upsides and downsides, we can begin to build the resilience needed to navigate this new frontier responsibly. Brundage's departure underscores a pressing issue: the lack of comprehensive preparedness for AGI across the board. His insights reveal that neither OpenAI nor any leading research lab is fully equipped to manage the advent of AGI. The potential impacts of AGI are vast and complex, and the world remains largely unprepared to navigate them, posing significant risks as AGI could transform various aspects of society, from economic structures to social dynamics. This lack of readiness is not just a technical issue but a multifaceted challenge that requires immediate attention. AI development often prioritizes technological advancement over safety, creating a precarious balance. In a competitive, capitalistic environment, companies face immense pressure to innovate rapidly, sometimes at the expense of robust safety measures. This dynamic can lead to the deployment of advanced technologies without fully understanding or mitigating their risks. The challenge lies in striking a balance between the drive for technological progress and the imperative of ensuring AI safety. Responsible AI development is not just an ethical imperative but a necessity for long-term sustainability and public trust in AI technologies. Effective AI governance requires a shared understanding of both the benefits and potential downsides of AI technologies. Developing technical tools and a robust regulatory infrastructure is essential to manage the risks associated with AGI.
Societal resilience is also crucial, as communities must be prepared to navigate the transformative impacts of AI. Policymakers and stakeholders need to collaborate to create frameworks that ensure AI technologies are developed and deployed responsibly, with a focus on ethical considerations and societal well-being. The rise of AI could lead to significant economic and societal shifts. Job opportunities may be disrupted, potentially exacerbating economic inequality. There is also the possibility of a post-work society, where early retirement and high living standards become more common. However, this transition could also carry risks of its own. Addressing these potential outcomes requires proactive planning and policy interventions to ensure that the benefits of AI are distributed equitably across society. Given the rapid advancements in AI, there is an urgent need for policymakers to act. Existing legal frameworks must be adapted, and new institutions may be necessary to manage the risks associated with AI. Deliberate policy choices are required to ensure that AGI's benefits are realized and distributed equitably across society. The debate on how to ensure AI benefits all of humanity is crucial, as is addressing the growing gap between free and paid AI capabilities. The potential benefits of AGI are not guaranteed and require careful planning and policy intervention. Ensuring that AI technologies benefit humanity as a whole involves addressing complex ethical, economic, and social challenges. Brundage's departure from OpenAI underscores the critical need for comprehensive strategies to tackle the multifaceted challenges posed by AGI development and deployment. As the world moves closer to realizing AGI, the importance of proactive and inclusive policy action cannot be overstated. The journey towards AGI is not just a technological endeavor but a societal one. It requires the collective effort of researchers, policymakers, industry leaders, and the public to ensure that the development of AGI aligns with human values and benefits all of humanity. The time to act is now, as the decisions made today will shape the AI-driven world of tomorrow.
[12]
OpenAI Winds Down 'AGI Readiness' Team as Policy Leader Exits
Miles Brundage, an OpenAI policy leader focused on ensuring the safety of its upcoming artificial intelligence, announced in a post on X on Wednesday that he would be leaving the company so he can "have more freedom to publish" research on the topic and "be more independent." His exit adds to the recent string of departures of employees associated with AI safety, such as co-founder and ...
[13]
OpenAI Exodus Continues: AGI Readiness Advisor Miles Brundage Resigns
Brundage urges OpenAI employees to speak their minds and raise concerns inside the company. After Mira Murati resigned from OpenAI last month, Miles Brundage has now left the company in another high-profile exit. Brundage served as the Head of Policy Research at OpenAI and, more recently, was made Senior Advisor for the AGI Readiness team. His work was instrumental in safely deploying AI models in ChatGPT, and he led red teaming efforts at OpenAI. The new "System Card" we see for OpenAI models is thanks to his vision. In his X post, Brundage says, "I think I'll have more impact as a policy researcher/advocate in the non-profit sector, where I'll have more of an ability to publish freely and more independence." As I have noted in my piece on OpenAI's internal conflicts, the company's shift toward profit-driven products over AI research and safety is pushing many longtime researchers to leave. OpenAI is also working to make its non-profit board toothless and become a for-profit corporation. This radical shift in OpenAI's culture is forcing many to quit. He further mentions, "OpenAI has a lot of difficult decisions ahead, and won't make the right decisions if we succumb to groupthink," urging OpenAI employees to raise concerns inside the company. More crucially, with Miles Brundage's departure, OpenAI is disbanding the AGI Readiness team. Its members will be absorbed by other teams, and some projects will move to the Mission Alignment team. Brundage has shared his thoughts in more detail on Substack. Apart from that, The New York Times interviewed a former OpenAI researcher, Suchir Balaji, who says the company broke copyright law for AI training. Balaji quit the company in August because "he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit." With such high-profile exits, what do you think about OpenAI's new direction? Is it going to take AI safety seriously or focus more on shipping commercial products?
[14]
OpenAI Policy Expert Miles Brundage Resigns Amid New Models Launch
Brundage's exit follows significant departures, signaling changes at OpenAI. OpenAI's Senior Advisor for AGI Readiness, Miles Brundage, has resigned from the company after six years. In his resignation letter, Brundage noted that he is leaving to pursue AI policy research outside the technology industry. His move comes at a time when Sam Altman's OpenAI is navigating internal issues and transformations while introducing new products, such as consistency models, to spur the advancement of AI. Miles Brundage joined OpenAI in 2018 and worked on a range of policy and safety issues. His work mainly revolved around the proper management and use of the company's latest AI systems, such as ChatGPT. Over the years, Brundage has contributed to the development of the company's red teaming program and was involved in the creation of "system card" reports about the strengths and weaknesses of OpenAI's artificial intelligence models. As a member of the AGI Readiness team, Brundage provided ethical guidance to executives, including CEO Sam Altman, on issues linked to AI. He was instrumental in the development of OpenAI's safety culture during a key period of the company's growth. "Working at OpenAI is an incredibly high-impact opportunity," Brundage noted on his social media page, adding that it was a tough choice to step down. He applauded the mission of the firm but also stressed the importance of having more independent researchers in AI policy discussions. Miles Brundage's departure is notable as part of a series of changes at OpenAI, with CTO Mira Murati and research VP Barret Zoph also leaving the company in recent weeks. Sam Altman endorsed Brundage's decision to walk away from OpenAI, saying that the external policy research work Brundage plans to do will be good for OpenAI. OpenAI's economic research division, which was previously under the AGI Readiness team, will be managed by the company's new chief economist, Ronnie Chatterji. Joshua Achiam, head of mission alignment, will be taking over some of the responsibilities and projects previously handled by Brundage. In his next role, Miles Brundage plans to concentrate on the regulation of AI, the effects of AI on the economy, and the future safety of artificial intelligence. He says these are important areas for addressing the challenges associated with applying artificial intelligence across the industry, including the use of models such as consistency models. Concurrently, Sam Altman's OpenAI has introduced consistency models, a new approach to speeding up artificial intelligence's sampling processes. These models, which are designed to generate high-quality samples faster than traditional diffusion models, represent a significant advancement in artificial intelligence technology. The launch of these models is part of the firm's broader effort to scale up its capabilities while addressing efficiency challenges, especially after securing $6.6 billion in funding. The development of consistency models comes at a time when the company is facing heightened scrutiny over its practices, including allegations of copyright violations in training its models. Former OpenAI employees, such as Suchir Balaji, have raised concerns about the company's methods, contributing to the ongoing debate about how AI technologies should be governed.
[15]
OpenAI's reputational double-whammy
On Wednesday, Miles Brundage -- who had been leading a team charged with thinking about policies to help both the company and society at large prepare for the advent of "artificial general intelligence," or AGI -- announced he was departing the company on Friday after more than six years so he could continue his work with fewer constraints. In a lengthy Substack post, Brundage said OpenAI had placed increasingly restrictive limits on what he could say in published research. He also said that, by founding or joining an AI policy non-profit, he hoped to become more effective in warning people of the urgency around AI's dangers, as "claims to this effect are often dismissed as hype when they come from industry." Brundage's post did not take any overt swipes at his soon-to-be-former employer -- indeed, he listed CEO Sam Altman as one of many people who provided "input on earlier versions of this draft" -- but it did complain at length about AI companies in general "not necessarily [giving] AI safety and security the attention it deserves by default." "There are many reasons for this, one of which is a misalignment between private and societal interests, which regulation can help reduce. There are also difficulties around credible commitments to and verification of safety levels, which further incentivize corner-cutting," Brundage wrote. "Corner-cutting occurs across a range of areas, including prevention of harmfully biased and hallucinated outputs as well as investment in preventing the catastrophic risks on the horizon." Brundage's departure extends a string of high-profile resignations from OpenAI this year -- including Mira Murati, its chief technology officer, as well as Ilya Sutskever, a co-founder of the company and its former chief scientist -- many of which were either explicitly or likely related to the company's shifting stance on AI safety. OpenAI was initially founded as a research house for the development of safe AI, but over time the need for hefty outside funding -- it recently raised a $6.6 billion round at a $157 billion valuation -- has gradually tilted the scales towards its for-profit side, which is likely to soon formally become OpenAI's dominant structural component. Co-founders Sutskever and John Schulman both left OpenAI this year to focus more fully on safe AI. Sutskever founded his own company and Schulman joined OpenAI arch-rival Anthropic, as did Jan Leike, a key colleague of Sutskever's who declared that "over the past years, safety culture and processes [at OpenAI] have taken a backseat to shiny products." Already by August, it had become clear that around half of OpenAI's safety-focused staff had departed in recent months -- and that was before the dramatic exit of Murati, who frequently found herself having to adjudicate arguments between the firm's safety-first researchers and its more gung-ho commercial team, as Fortune reported. For example, OpenAI's staffers were given just nine days to test the safety of the firm's powerful GPT-4o model before its launch, according to sources familiar with the situation.
In a further sign of OpenAI's shifting safety focus, Brundage said that the AGI Readiness team he led is being disbanded, with its staff being "distributed among other teams." Its economic research sub-team is becoming the responsibility of new OpenAI chief economist Ronnie Chatterji, he said. He did not specify how the other staff were being redeployed. It is also worth noting that Brundage is not the first person at OpenAI to face problems over the research they wish to publish. After last year's dramatic and short-lived ouster of Altman by OpenAI's safety-focused board, it emerged that Altman had previously laid into then-board-member Helen Toner because she co-authored an AI safety paper that implicitly criticized the company. Concerns about OpenAI's culture and methods were also heightened by another story on Wednesday. The New York Times carried a major piece on Suchir Balaji, an AI researcher who spent nearly four years at OpenAI before leaving in August. Balaji says he left because he realized that OpenAI was breaking copyright law in the way it trained its models on copyrighted data from the web, and because he decided that chatbots like ChatGPT were more harmful than beneficial for society. Again, OpenAI's transmogrification from research outfit to money-spinner is central here. "With a research project, you can, generally speaking, train on any data. That was the mind-set at the time," Balaji told the Times. Now he claims that AI models threaten the commercial viability of the businesses that generated that data in the first place, saying: "This is not a sustainable model for the internet ecosystem as a whole." OpenAI and many of its peers have been sued by copyright holders over that training, which involved copying seas of data so that the companies' systems could ingest and learn from it. Those AI models are not thought to contain whole copies of the data as such, and they rarely output close copies in response to users' prompts -- it's the initial, unauthorized copying that the suits are generally targeting. The standard defense in such cases is for companies accused of violating copyright to argue that the way they are using copyrighted works should constitute "fair use" -- that copyright was not infringed because the companies transformed the copyrighted works into something else, in a non-exploitative way, used them in a way that did not directly compete with the original copyright holders or prevent them from possibly exploiting the work in a similar fashion, or served the public interest. The defense is easier to apply to non-commercial use cases -- and is always decided by judges on a case-by-case basis. In a Wednesday blog post, Balaji dove into the relevant U.S. copyright law and assessed how its tests for establishing "fair use" related to OpenAI's data practices. He alleged that the advent of ChatGPT had negatively affected traffic to destinations like the developer Q&A site Stack Overflow, saying ChatGPT's output could in some cases substitute for the information found on that site. He also presented mathematical reasoning that, he claimed, could be used to determine links between an AI model's output and its training data. Balaji is a computer scientist and not a lawyer. And there are plenty of copyright lawyers who do think a fair use defense of using copyrighted works in the training of AI models should be successful.
However, Balaji's intervention will no doubt catch the attention of the lawyers representing the publishers and book authors that have sued OpenAI for copyright infringement. It seems likely that his insider analysis will end up playing some role in these cases, the outcome of which could determine the future economics of generative AI, and possibly the futures of companies such as OpenAI. It is rare for AI companies' employees to go public with their concerns over copyright. Until now, the most significant case has probably been that of Ed Newton-Rex, who was head of audio at Stability AI before quitting last November with the claim that "today's generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on, so I don't see how using copyrighted works to train generative AI models of this nature can be considered fair use." "We build our AI models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents," an OpenAI spokesperson said in a statement. "We view this principle as fair to creators, necessary for innovators, and critical for U.S. competitiveness." Meanwhile, OpenAI's spokesperson said Brundage's "plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact." "We're confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government," they said. Brundage had seen the scope of his job at OpenAI narrowed over his career with the company, going from the development of AI safety testing methodologies and research into current national and international governance issues related to AI to an exclusive focus on the handling of a potential superhuman AGI, rather than AI's near-term safety risks. Meanwhile, OpenAI has hired a growing cast of heavy-hitting policy experts, many with extensive political, national security, or diplomatic experience, to head teams looking at various aspects of AI governance and policy. It hired Anna Makanju, a former Obama administration national security official who had worked in policy roles at SpaceX's Starlink and Facebook, to oversee its initial outreach to government officials both in Washington, D.C., and around the globe. She is currently OpenAI's vice president of global impact. More recently, it brought in veteran political operative Chris Lehane, who had also been in a communications and policy role at Airbnb, to be its vice president of global affairs. Chatterji, who is taking over the economics team that formerly reported to Brundage, previously worked in various advisory roles in President Joe Biden's and President Barack Obama's White Houses and also served as chief economist at the Department of Commerce. It is not uncommon at fast-growing technology companies to see early employees have their roles circumscribed by the later addition of senior staff. In Silicon Valley, this is often referred to as "getting layered." And, although it is not explicitly mentioned in Brundage's blog post, it may be that the loss of his economic unit to Chatterji, coming after the previous loss of some of his near-term AI policy research to Makanju and Lehane, was a final straw. Brundage did not immediately respond to requests to comment for this story. Brundage used his post to set out the issues on which he will now focus.
These include: assessing and forecasting AI progress; the regulation of frontier AI safety and security; AI's economic impacts; the acceleration of positive use cases for AI; policy around the distribution of AI hardware; and the high-level "overall AI grand strategy." He warned that "neither OpenAI nor any other frontier lab" was really ready for the advent of AGI, nor was the outside world. "To be clear, I don't think this is a controversial statement among OpenAI's leadership," he stressed, before arguing that people should still go work at the company as long as they "take seriously the fact that their actions and statements contribute to the culture of the organization, and may create positive or negative path dependencies as the organization begins to steward extremely advanced capabilities." Brundage noted that OpenAI had offered him funding, compute credits, and even early model access, to aid his upcoming work. However, he said he still hadn't decided whether to take up those offers, as they "may compromise the reality and/or perception of independence."
OpenAI has disbanded its AGI Readiness team following the resignation of senior advisor Miles Brundage, who warns that neither the company nor the world is prepared for advanced AI.
OpenAI, the prominent artificial intelligence research company, has dissolved its AGI Readiness team, a group dedicated to assessing the company's preparedness for advanced artificial intelligence [1]. This decision comes in the wake of the departure of Miles Brundage, the team's principal advisor, who announced his resignation through a statement on Substack [2].
In his departure statement, Brundage expressed significant concerns about the readiness for Artificial General Intelligence (AGI):
"Neither OpenAI nor any other frontier lab is ready, and the world is also not ready," Brundage stated 3. He emphasized the need for a concerted effort to ensure AI's safety and beneficial impact, suggesting that his research would have more impact outside of OpenAI [1].
Following the dissolution of the AGI Readiness team, OpenAI plans to reassign former team members to other divisions within the company [2]. A sub-team focused on economic research will be moved under OpenAI's new chief economist, Ronnie Chatterji [3].
This latest development follows a series of high-profile departures and team dissolutions at OpenAI. In May, the company disbanded its Superalignment team, which focused on long-term AI risks, and in September CTO Mira Murati, research chief Bob McGrew and research VP Barret Zoph announced their departures [3].
The company is reportedly facing pressure to transition from a nonprofit to a for-profit public benefit corporation within two years [4]. This shift comes after OpenAI completed a $6.6 billion funding round, leading to a valuation of $157 billion [3].
The dissolution of the AGI Readiness team and previous safety-focused teams has raised concerns in the AI ethics community [2]. Brundage's departure and warnings highlight the ongoing debate about the readiness of AI companies and society at large for the potential advent of AGI [5].
Brundage has expressed his intention to found a non-profit organization or join an existing one to focus on AI policy research [1]. He believes that working outside the industry will allow him to have a broader impact on AI development and policy-making [3].
As the AI industry continues to evolve rapidly, the departure of key safety-focused personnel and the dissolution of dedicated teams at a leading company like OpenAI may have significant implications for the future development and regulation of advanced AI systems.