Curated by THEOUTPOST
On Tue, 17 Sept, 12:06 AM UTC
15 Sources
[1]
OpenAI announces new independent board oversight committee focused on safety
[Photo: OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.]

OpenAI on Monday said its Safety and Security Committee, which the company introduced in May as it dealt with controversy over security processes, will become an independent board oversight committee. The group will be chaired by Zico Kolter, director of the machine learning department at Carnegie Mellon University's school of computer science. Other members include Adam D'Angelo, an OpenAI board member and co-founder of Quora, former NSA chief and board member Paul Nakasone, and Nicole Seligman, former executive vice president at Sony.

The committee will oversee "the safety and security processes guiding OpenAI's model deployment and development," the company said. It recently wrapped up its 90-day review evaluating OpenAI's processes and safeguards and then made recommendations to the board. OpenAI is releasing the group's findings as a public blog post.

OpenAI, the Microsoft-backed startup behind ChatGPT and SearchGPT, is currently pursuing a funding round that would value the company at more than $150 billion, according to sources familiar with the situation who asked not to be named because details of the round haven't been made public. Thrive Capital is leading the round and plans to invest $1 billion, and Tiger Global is planning to join as well. Microsoft, Nvidia and Apple are reportedly also in talks to invest.

The committee's five key recommendations included the need to establish independent governance for safety and security, enhance security measures, be transparent about OpenAI's work, collaborate with external organizations, and unify the company's safety frameworks.

Last week, OpenAI released o1, a preview version of its new AI model focused on reasoning and "solving hard problems." The company said the committee "reviewed the safety and security criteria that OpenAI used to assess OpenAI o1's fitness for launch," as well as safety evaluation results. The committee will, "along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed."

While OpenAI has been in hyper-growth mode since late 2022, when it launched ChatGPT, it's been simultaneously riddled with controversy and high-level employee departures, with some current and former employees concerned that the company is growing too quickly to operate safely. In July, Democratic senators sent a letter to OpenAI CEO Sam Altman concerning "questions about how OpenAI is addressing emerging safety concerns." The prior month, a group of current and former OpenAI employees published an open letter describing concerns about a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

And in May, a former OpenAI board member, speaking about Altman's temporary ouster in November, said he gave the board "inaccurate information about the small number of formal safety processes that the company did have in place" on multiple occasions. That month, OpenAI decided to disband its team focused on the long-term risks of AI just a year after announcing the group. The team's leaders, Ilya Sutskever and Jan Leike, announced their departures from OpenAI in May. Leike wrote in a post on X that OpenAI's "safety culture and processes have taken a backseat to shiny products."
[2]
OpenAI's new safety committee has some suggestions
Three months after its formation, OpenAI's new Safety and Security Committee is now an independent board oversight committee, and has made its initial safety and security recommendations for OpenAI's projects, according to a post on the company's website. Zico Kolter, director of the machine learning department at Carnegie Mellon's School of Computer Science, will chair the board, OpenAI said. The board also includes Quora co-founder and chief executive Adam D'Angelo, retired U.S. Army general Paul Nakasone, and Nicole Seligman, former executive vice president of Sony Corporation (SONY).

OpenAI announced the Safety and Security Committee in May, after disbanding its Superalignment team, which was dedicated to controlling AI's existential dangers. Ilya Sutskever and Jan Leike, the Superalignment team's co-leads, both resigned from the company before its disbandment. The committee reviewed OpenAI's safety and security criteria and the results of safety evaluations for its newest AI model that can "reason," o1-preview, before it was launched, the company said. After conducting a 90-day review of OpenAI's security measures and safeguards, the committee has made recommendations in five key areas that the company says it will implement. Here's what OpenAI's newly independent board oversight committee is recommending the AI startup do as it continues developing and deploying its models.

OpenAI's leaders will have to brief the committee on safety evaluations of its major model releases, as they did with o1-preview. The committee will also be able to exercise oversight over OpenAI's model launches alongside the full board, meaning it can delay the release of a model until safety concerns are resolved. This recommendation is likely an attempt to restore some confidence in the company's governance after OpenAI's board ousted chief executive Sam Altman in November. Altman was ousted, the board said, because he "was not consistently candid in his communications with the board." Despite a lack of transparency about why exactly he was fired, Altman was reinstated days later.

OpenAI said it will add more staff to build "around-the-clock" security operations teams and continue investing in security for its research and product infrastructure. After the committee's review, the company said it found ways to collaborate with other companies in the AI industry on security, including by developing an Information Sharing and Analysis Center to report threat intelligence and cybersecurity information.

In February, OpenAI said it found and shut down OpenAI accounts belonging to "five state-affiliated malicious actors" using AI tools, including ChatGPT, to carry out cyberattacks. "These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks," OpenAI said in a statement. OpenAI said its "findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks."

While it has released system cards detailing the capabilities and risks of its latest models, including for GPT-4o and o1-preview, OpenAI said it plans to find more ways to share and explain its work around AI safety. The startup said it developed new safety training measures for o1-preview's reasoning abilities, adding that the models were trained "to refine their thinking process, try different strategies, and recognize their mistakes."
For example, in one of OpenAI's "hardest jailbreaking tests," o1-preview scored higher than GPT-4. OpenAI said it wants more safety assessments of its models done by independent groups, adding that it is already collaborating with third-party safety organizations and labs that are not affiliated with the government. The startup is also working with the AI Safety Institutes in the U.S. and U.K. on research and standards. In August, OpenAI and Anthropic reached an agreement with the U.S. government to allow it access to new models before and after public release.

As its models become more complex (for example, it claims its new model can "think"), OpenAI said it is building on its previous practices for launching models to the public and aims to have an established, integrated safety and security framework. The committee has the power to approve the risk assessments OpenAI uses to determine if it can launch its models. Helen Toner, one of OpenAI's former board members who was involved in Altman's firing, has said one of her main concerns about the leader was that he misled the board "on multiple occasions" about how the company was handling its safety procedures. Toner resigned from the board after Altman returned as chief executive.
[3]
OpenAI's new safety board has more power and no Sam Altman
The company's new board now has the power to pump the brakes on new AI model launches. OpenAI has announced significant changes to its safety and security practices, including the establishment of a new independent board oversight committee. This move comes with a notable shift: CEO Sam Altman is no longer part of the safety committee, marking a departure from the previous structure.

The newly formed Safety and Security Committee (SSC) will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other key members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former EVP and General Counsel of Sony Corporation. This new committee replaces the previous Safety and Security Committee that was formed in May 2024, which included Altman among its members. The original committee was tasked with making recommendations on critical safety and security decisions for OpenAI projects and operations.

The SSC's responsibilities now extend beyond recommendations. It will have the authority to oversee safety evaluations for major model releases and exercise oversight over model launches. Crucially, the committee will have the power to delay a release until safety concerns are adequately addressed.

This restructuring follows a period of scrutiny regarding OpenAI's commitment to AI safety. The company has faced criticism in the past for disbanding its Superalignment team and the departures of key safety-focused personnel. The removal of Altman from the safety committee appears to be an attempt to address concerns about potential conflicts of interest in the company's safety oversight. OpenAI's latest safety initiative also includes plans to enhance security measures, increase transparency about its work, and collaborate with external organizations. The company has already reached agreements with the US and UK AI Safety Institutes to collaborate on researching emerging AI safety risks and standards for trustworthy AI.
[4]
Sam Altman is off of OpenAI's new safety board
OpenAI is making big changes to its Safety and Security Committee, which oversees the safety of AI as its capabilities grow, announcing that CEO Sam Altman will no longer be a member of the group. Instead, the new safety committee will be run by an independent board -- and will see its powers grow from just making recommendations to having the authority to supervise safety evaluations for new AI models or delay any new launches until safety concerns are addressed.

"We're committed to continuously improving our approach to releasing highly capable and safe models, and value the crucial role the Safety and Security Committee will play in shaping OpenAI's future," the company said in a statement.

The new committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and former EVP and General Counsel of Sony Nicole Seligman. It's a big change from a few months ago, when Altman announced he would lead a new safety board just weeks after he dismantled the company's original one. The removal of Altman appears intended to address concerns about potential conflicts of interest.

That decision followed the exit of several key members of the original safety team, including co-founder Ilya Sutskever and Jan Leike. Leike was especially critical of OpenAI in his departure, accusing the company of neglecting "safety culture and processes" in favor of "shiny products." He chose to depart the company, he said at the time, because he had "been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
[5]
Why Sam Altman Is Leaving OpenAI's Safety Committee
OpenAI's CEO Sam Altman is stepping down from the internal committee that the company created to advise its board on "critical safety and security" decisions amid the race to develop ever more powerful artificial intelligence technology. The committee, formed in May, had been evaluating OpenAI's processes and safeguards over a 90-day period. OpenAI published the committee's recommendations following the assessment on Sept. 16. First on the list: establishing independent governance for safety and security. As such, Altman, who, in addition to serving on OpenAI's board, oversees the company's business operations in his role as CEO, will no longer serve on the safety committee.

In line with the committee's recommendations, OpenAI says the newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI's board in August. Other members of the committee will include OpenAI board members Quora co-founder and CEO Adam D'Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Along with Altman, OpenAI's board chair Bret Taylor and several of the company's technical and policy experts will also step down from the committee.

The committee's other recommendations include enhancing security measures, being transparent about OpenAI's work, and unifying the company's safety frameworks. It also said it would explore more opportunities to collaborate with external organizations, like those used to evaluate OpenAI's recently released o1 series of reasoning models for dangerous capabilities.

The Safety and Security Committee is not OpenAI's first stab at creating independent oversight. OpenAI's for-profit arm, created in 2019, is controlled by a non-profit entity with a "majority independent" board, tasked with ensuring it acts in accordance with its mission of developing safe, broadly beneficial artificial general intelligence (AGI) -- a system that surpasses humans in most regards.

In November, OpenAI's board fired Altman, saying that he had not been "consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." After employees and investors revolted -- and board member and company president Greg Brockman resigned -- he was swiftly reinstated as CEO, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever resigned. Brockman later returned as president of the company.

The incident highlighted a key challenge for the rapidly growing company. Critics including Toner and McCauley argue that having a formally independent board isn't enough of a counterbalance to the strong profit incentives the company faces. Earlier this month, Reuters reported that OpenAI's ongoing fundraising efforts, which could catapult its valuation to $150 billion, might hinge on changing its corporate structure. Toner and McCauley say board independence doesn't go far enough and that governments must play an active role in regulating AI. "Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable," the former board members wrote in the Economist in May, reflecting on OpenAI's November boardroom debacle.
In the past, Altman has urged regulation of AI systems, but OpenAI also lobbied against California's AI bill, which would mandate safety protocols for developers. Going against the company's position, more than 30 current and former OpenAI employees have publicly supported the bill.

The Safety and Security Committee's establishment in late May followed a particularly tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company's "superalignment" team, which focused on ensuring that if AI systems surpass human-level intelligence, they remain under human control, resigned. Leike accused OpenAI of prioritizing "shiny products" over safety in a post on X. The team was disbanded following their departure. The same month, OpenAI came under fire for asking departing employees to sign agreements that barred them from criticizing the company, or else forfeit their vested equity. (OpenAI later said that these provisions had not and would not be enforced and that they would be removed from all exit paperwork going forward.)
[6]
Sam Altman departs OpenAI's safety committee | TechCrunch
OpenAI CEO Sam Altman is leaving the internal commission OpenAI created in May to oversee "critical" safety decisions related to the company's projects and operations. In a blog post today, OpenAI said the committee, the Safety and Security Committee, will become an "independent" board oversight group chaired by Carnegie Mellon professor Zico Kolter and including Quora CEO Adam D'Angelo, retired U.S. Army general Paul Nakasone and ex-Sony EVP Nicole Seligman. All are existing members of OpenAI's board of directors.

OpenAI noted in its post that the commission conducted a safety review of o1, OpenAI's latest AI model -- albeit while Altman was still its chair. The group will continue to receive "regular briefings" from OpenAI safety and security teams, said the company, and retain the power to delay releases until safety concerns are addressed. "As part of its work, the Safety and Security Committee ... will continue to receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-release monitoring," OpenAI wrote in the post. "[W]e are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches."

Altman's departure from the Safety and Security Committee comes after five U.S. senators raised questions about OpenAI's policies in a letter addressed to Altman this summer. Nearly half of the OpenAI staff that once focused on AI's long-term risks have left, and ex-OpenAI researchers have accused Altman of opposing "real" AI regulation in favor of policies that advance OpenAI's corporate aims. To their point, OpenAI has dramatically increased its expenditures on federal lobbying, budgeting $800,000 for the first six months of 2024 versus $260,000 for all of last year. Altman also earlier this spring joined the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which provides recommendations for the development and deployment of AI throughout U.S. critical infrastructure.

Even with Altman removed, there's little to suggest the Safety and Security Committee would make difficult decisions that seriously impact OpenAI's commercial roadmap. Tellingly, OpenAI said in May that it would look to address "valid criticisms" of its work via the commission -- "valid criticisms" being in the eye of the beholder, of course. In an op-ed for The Economist in May, ex-OpenAI board members Helen Toner and Tasha McCauley said that they don't think OpenAI as it exists today can be trusted to hold itself accountable. "[B]ased on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives," they wrote.

And OpenAI's profit incentives are growing. The company is rumored to be in the midst of raising $6.5+ billion in a funding round that'd value OpenAI at over $150 billion. To cinch the deal, OpenAI will likely abandon its hybrid nonprofit corporate structure, which sought to cap investors' returns in part to ensure OpenAI remained aligned with its founding mission: developing artificial general intelligence that "benefits all of humanity."
[7]
Sam Altman steps down as head of OpenAI's safety group
OpenAI's internal safety committee is becoming an independent oversight body, as CEO Sam Altman leaves the group's top leadership role. The announcement came as part of the results of a 90-day evaluation by the existing Altman-chaired Safety and Security Committee into the company's AI safeguards and governance. The committee's new leadership includes Zico Kolter (Carnegie Mellon University), as well as existing members Adam D'Angelo (Quora) and Nicole Seligman (formerly Sony). "The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed," the announcement reads.

In May, the AI company launched a revamped safety and security team with Altman in the top seat, following the dissolution of its former security body and the loss of more than half of its relevant employees. Multiple former employees have raised concerns about Altman's leadership and OpenAI's safety protocols. Last week, Altman told OpenAI employees that the company plans to overhaul its non-profit corporate structure in favor of a for-profit push. The company, the CEO said, had "outgrown" its existence as a capped for-profit LLC overseen by a non-profit board, but the shift was also reportedly an essential element of the AI startup's anticipated $150 billion valuation.
[8]
OpenAI CEO Sam Altman quits safety committee after mounting criticism - Times of India
OpenAI CEO Sam Altman has stepped down from the company's internal Safety and Security Committee, following increased scrutiny from US lawmakers and criticism from former employees over the company's approach to AI safety and regulation. The committee, established in May to oversee critical safety decisions, will now become an independent oversight board with the power to delay releases of new AI models, OpenAI announced Monday. This newly empowered committee will be chaired by Carnegie Mellon professor Zico Kolter and include Quora CEO Adam D'Angelo, retired U.S. Army General Paul Nakasone, and former Sony executive Nicole Seligman. All are existing members of OpenAI's board of directors.

What OpenAI said on the committee's new leadership

According to OpenAI's blog post, the committee will "be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed." The restructuring follows a 90-day review of OpenAI's safety and security practices. The committee has already conducted a safety review of OpenAI's latest AI model, o1.

Altman's exit from the committee comes after five US senators raised questions about OpenAI's policies in a letter addressed to him this summer. Nearly half of the OpenAI staff that once focused on AI's long-term risks have left, and ex-OpenAI researchers have accused Altman of opposing "real" AI regulation in favour of policies that advance OpenAI's corporate aims. OpenAI is reportedly seeking to raise over $6.5 billion in funding at a valuation exceeding $150 billion. There are rumours the company may abandon its hybrid nonprofit structure to allow for greater investor returns, potentially moving further from its founding mission of developing beneficial AI for humanity.
[9]
OpenAI's safety committee to oversee security practices as an independent body
Sept 16 (Reuters) - Microsoft-backed OpenAI said on Monday its safety committee will oversee security and safety processes for the company's artificial intelligence model development and deployment, as an independent body. The change follows the committee's own recommendations to OpenAI's board which were made public for the first time. OpenAI, the company behind the viral chatbot ChatGPT, formed its Safety and Security Committee this May to evaluate and further develop the company's existing safety practices. The launch of ChatGPT in late 2022 sparked a significant wave of interest and excitement around AI. The buzz around ChatGPT highlighted both the opportunities and challenges of AI, prompting discussions on ethical use and potential biases. As part of the committee's recommendations, OpenAI said it is evaluating the development of an "Information Sharing and Analysis Center (ISAC) for the AI industry, to enable the sharing of threat intelligence and cybersecurity information among entities within the AI sector." The independent committee will be chaired by Zico Kolter, professor and director of the machine learning department at Carnegie Mellon University, who is part of OpenAI's board. "We are pursuing expanded internal information segmentation, additional staffing to deepen around-the-clock security operations teams," according to OpenAI. The company also said it will work toward becoming more transparent about the capabilities and risks of its AI models. Last month, OpenAI signed a deal with the United States government for research, testing and evaluation of the company's AI models. (Reporting by Harshita Mary Varghese in Bengaluru; Editing by Shounak Dasgupta)
[10]
OpenAI's safety committee to oversee security practices as an independent body
(Reuters) - Microsoft-backed OpenAI said on Monday its safety committee will oversee security and safety processes for the company's artificial intelligence model development and deployment, as an independent body. The change follows the committee's own recommendations to OpenAI's board which were made public for the first time. OpenAI, the company behind the viral chatbot ChatGPT, formed its Safety and Security Committee this May to evaluate and further develop the company's existing safety practices. The launch of ChatGPT in late 2022 sparked a significant wave of interest and excitement around AI. The buzz around ChatGPT highlighted both the opportunities and challenges of AI, prompting discussions on ethical use and potential biases. As part of the committee's recommendations, OpenAI said it is evaluating the development of an "Information Sharing and Analysis Center (ISAC) for the AI industry, to enable the sharing of threat intelligence and cybersecurity information among entities within the AI sector." The independent committee will be chaired by Zico Kolter, professor and director of the machine learning department at Carnegie Mellon University, who is part of OpenAI's board. "We are pursuing expanded internal information segmentation, additional staffing to deepen around-the-clock security operations teams," according to OpenAI. The company also said it will work toward becoming more transparent about the capabilities and risks of its AI models. Last month, OpenAI signed a deal with the United States government for research, testing and evaluation of the company's AI models. (Reporting by Harshita Mary Varghese in Bengaluru; Editing by Shounak Dasgupta)
[11]
Sam Altman is leaving a key OpenAI board. His departure should satisfy some big critics.
The committee was first formed in May, following a series of high-level departures by employees who publicly worried that the ChatGPT maker could not govern itself and was no longer acting responsibly about artificial intelligence. When the group was announced, Altman, board chair Bret Taylor, and five OpenAI technical and policy experts were named to the committee, alongside the independent board members.

The reworked committee is now chaired by Zico Kolter, a professor at Carnegie Mellon University, and includes Quora co-founder and CEO Adam D'Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, the former general counsel of Sony Corporation, according to a company blog post. All four are also on OpenAI's board of directors. The safety committee will "exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed," the blog post said. The post also said that the committee reviewed the safety assessment of o1, a series of AI models "designed to spend more time thinking before they respond."

Last month, the company battled to stop an AI safety bill in California, saying it would stifle progress and drive companies out of the state. Former OpenAI employees said that the company's opposition to the bill was disappointing -- but in line with its recent path. "We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing," former OpenAI researchers William Saunders and Daniel Kokotajlo wrote in the letter. "But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."

The company has found itself in the crosshairs of other recent scrutiny, too. In July, whistleblowers at OpenAI contacted the Securities and Exchange Commission, calling for the agency to investigate the company for rule violations around NDAs. Weeks before that, nine current and former OpenAI employees signed an open letter pointing out the risks of generative AI. OpenAI's corporate governance structure has confused Silicon Valley. In 2019, the company moved from a nonprofit to a capped-profit structure, and recent reports said that Altman may turn it into a regular, for-profit company after all. Last week, Bloomberg reported that OpenAI is raising money at a $150 billion valuation -- more than the market capitalization of over 88% of S&P 500 firms, including Goldman Sachs, Uber, and BlackRock.
[12]
OpenAI's safety committee to oversee security practices as an independent body
OpenAI, the company behind the viral chatbot ChatGPT, formed its Safety and Security Committee this May to evaluate and further develop the company's existing safety practices. The launch of ChatGPT in late 2022 sparked a significant wave of interest and excitement around AI. The buzz around ChatGPT highlighted both the opportunities and challenges of AI, prompting discussions on ethical use and potential biases. As part of the committee's recommendations, OpenAI said it is evaluating the development of an "Information Sharing and Analysis Center (ISAC) for the AI industry, to enable the sharing of threat intelligence and cybersecurity information among entities within the AI sector." The independent committee will be chaired by Zico Kolter, professor and director of the machine learning department at Carnegie Mellon University, who is part of OpenAI's board. "We are pursuing expanded internal information segmentation, additional staffing to deepen around-the-clock security operations teams," according to OpenAI. The company also said it will work toward becoming more transparent about the capabilities and risks of its AI models. Last month, OpenAI signed a deal with the United States government for research, testing and evaluation of the company's AI models. (Reporting by Harshita Mary Varghese in Bengaluru; Editing by Shounak Dasgupta)
[13]
OpenAI forms Independent Safety Oversight, Altman departs SSC
OpenAI CEO Sam Altman will no longer be a part of the company's Safety and Security Committee (SSC), as per a blog post on September 16, 2024. The SSC will now function as an independent Board oversight committee chaired by Zico Kolter, Director of the Machine Learning Department at the School of Computer Science at Carnegie Mellon University, and including Quora co-founder and CEO Adam D'Angelo, retired US Army General Paul Nakasone, and former EVP and General Counsel of Sony Corporation Nicole Seligman.

What is the SSC?

The Committee was established in May this year to make recommendations to the Board on critical safety and security decisions. The SSC and the Board reviewed the safety assessment of the o1 release. As per the latest updates, company leadership will brief the SSC on safety evaluations for future major model releases, and the SSC, along with the full Board, will exercise oversight over model launches. The SSC will also have the authority to delay a release until safety concerns are addressed. "As one of its initial mandates, the Safety and Security Committee conducted a 90-day review of safety and security-related processes and safeguards and made recommendations to the full Board. Following the full Board's review, we are now sharing the Safety and Security Committee's recommendations across five key areas, which we are adopting," said the company blog post.

Chief among these recommendations was the need to establish independent governance for safety and security. The other suggestions were: enhancing security measures; being transparent about OpenAI's work; collaborating with external organizations; and unifying safety frameworks for model development and monitoring.

The SSC was formed after many of the previous OpenAI safety department members left, alleging inadequate safety measures in OpenAI's processes. The departing members included co-founder and chief scientist Ilya Sutskever and Head of Alignment Jan Leike. The two were co-leaders of OpenAI's Superalignment team, responsible for safety and for ensuring "AI systems much smarter than humans follow human intent." After leaving, Leike stated that he had reached a "breaking point," with his department struggling to get "crucial research done." He accused OpenAI of prioritising shiny products over "safety culture and processes." This last allegation even drew concern from a group of US senators, who raised it in an open letter addressed to Altman.
[14]
OpenAI's safety committee to oversee security practices as independent body
Microsoft-backed OpenAI said on Monday its safety committee will oversee security and safety processes for the company's artificial intelligence model development and deployment, as an independent body. The change follows the committee's own recommendations to OpenAI's board which were made public for the first time. OpenAI, the company behind the viral chatbot ChatGPT, formed its Safety and Security Committee this May to evaluate and further develop the company's existing safety practices. The launch of ChatGPT in late 2022 sparked a significant wave of interest and excitement around AI. The buzz around ChatGPT highlighted both the opportunities and challenges of AI, prompting discussions on ethical use and potential biases. As part of the committee's recommendations, OpenAI said it is evaluating the development of an "Information Sharing and Analysis Center (ISAC) for the AI industry, to enable the sharing of threat intelligence and cybersecurity information among entities within the AI sector." The independent committee will be chaired by Zico Kolter, professor and director of the machine learning department at Carnegie Mellon University, who is part of OpenAI's board. "We are pursuing expanded internal information segmentation, additional staffing to deepen around-the-clock security operations teams," according to OpenAI. The company also said it will work toward becoming more transparent about the capabilities and risks of its AI models. Last month, OpenAI signed a deal with the United States government for research, testing and evaluation of the company's AI models.
[15]
ChatGPT Parent OpenAI Establishes Oversight Committee For Safety Amid Recent Fears Of Biological Weapons Misuse Via o1 Model
ChatGPT-parent OpenAI has announced an independent board oversight committee focused on safety and security. It was established in response to ongoing security process controversies.

What Happened: In a blog post on Monday, OpenAI revealed that the Safety and Security Committee, which was formed in May, will now function as an independent board oversight committee. The committee will be led by Zico Kolter, the director of the machine learning department at Carnegie Mellon University's School of Computer Science. Other members include Adam D'Angelo, an OpenAI board member and co-founder of Quora, former NSA chief and board member Paul Nakasone, and Nicole Seligman, former executive vice president at Sony. The committee's mandate is to supervise "the safety and security processes guiding OpenAI's model deployment and development." It has completed a 90-day review of OpenAI's processes and safeguards, making recommendations to the board. This includes establishing independent governance for safety and security, enhancing security measures, promoting transparency about OpenAI's work, collaborating with external organizations, and unifying the company's safety frameworks. The committee will also oversee model launches, including the authority to delay a release until safety concerns are addressed.

Why It Matters: OpenAI has faced controversy and high-level employee departures, with concerns that the company is growing too quickly to operate safely. Currently, OpenAI is also pursuing a funding round that could value the company at over $150 billion. Thrive Capital is leading the round with a planned investment of $1 billion, with Tiger Global also expected to join. Microsoft, Nvidia, and Apple are reportedly in talks to invest. The AI startup has also launched a new model, called "o1," which is focused on reasoning and "solving hard problems." Earlier this month, OpenAI acknowledged that o1 could potentially be misused for creating biological weapons, rating it as having a "medium risk" for issues related to chemical, biological, radiological, and nuclear (CBRN) weapons.
OpenAI has announced the creation of a new independent board to oversee the safety and ethical implications of its AI technologies. This move comes as the company aims to address growing concerns about AI development and its potential risks.
In a significant move to address concerns surrounding artificial intelligence development, OpenAI has announced the establishment of an independent board oversight committee focused on safety [1]. This new committee, made up of members with backgrounds spanning machine learning, technology, national security, and law, will have the authority to guide and potentially delay the company's AI model launches [2].
The safety board consists of four members, all drawn from OpenAI's board of directors and selected for their expertise in areas such as technology, security, and policy. Notably, OpenAI CEO Sam Altman will not be a part of this committee, marking a clear separation between the company's leadership and the oversight body [3].
The committee has been granted substantial powers, including the ability to halt or delay AI model releases if safety concerns are identified. This degree of authority is rare in the AI industry and reflects OpenAI's stated commitment to responsible development [4].
The creation of this safety board comes in response to growing public and regulatory concerns about the potential risks associated with advanced AI systems. OpenAI's move is seen as a proactive step to ensure that its AI technologies are developed and deployed responsibly [5].
This initiative is expected to set a new standard for AI governance in the industry. Other major tech companies and AI developers may feel pressure to adopt similar oversight mechanisms to maintain public trust and regulatory compliance [2].
While the establishment of the safety board has been largely welcomed, some critics argue that it may slow down innovation at OpenAI. There are concerns that the additional layer of oversight could potentially hinder the company's ability to compete in the fast-paced AI market [4].
Others question whether the board will have true independence, given that its members are still appointed by OpenAI. The effectiveness of the committee in balancing safety concerns with the company's commercial interests remains to be seen [3].
The creation of this safety board marks a significant shift in AI governance. It sets a precedent for how AI companies might self-regulate in the future, potentially influencing upcoming AI regulations and policies worldwide [5].
As AI continues to advance rapidly, the role of this committee in shaping OpenAI's development trajectory will be closely watched by industry observers, policymakers, and the public alike. The success or failure of this initiative could have far-reaching implications for the future of AI safety and ethics in the tech industry [1].