Curated by THEOUTPOST
On Wed, 24 Jul, 12:03 AM UTC
7 Sources
[1]
OpenAI reassigns AI safety leader Madry, The Information reports
OpenAI has reassigned its artificial intelligence (AI) safety leader Aleksander Madry to a "bigger role within the research organization," The Information reported on Tuesday. Madry set up the AI company's Preparedness team last year to evaluate AI models for "catastrophic risks" before making them available to the public, the report said. OpenAI researcher Tejal Patwardhan will now manage much of the work of the Preparedness team, the report added, citing a person familiar with the matter. Madry's reassignment comes at a time when OpenAI's chatbots, which can engage in human-like conversations and create videos and images based on text prompts, have become increasingly powerful and have stirred safety concerns. Earlier this year, the Microsoft-backed company formed a Safety and Security Committee to be led by board members, including CEO Sam Altman, in the run-up to training its next artificial intelligence model. OpenAI and Madry did not immediately respond to requests for comment. (Reporting by Gursimran Kaur in Bengaluru; Editing by Tasim Zahid)
[2]
OpenAI assigns new project to AI safety leader Madry in revamp
July 23 (Reuters) - OpenAI Chief Executive Sam Altman said on Tuesday that the ChatGPT maker's AI safety leader, Aleksander Madry, was working on a new research project as the startup rejigs its preparedness team. "Aleksander is working on a new and v (very) important research project," Altman said in a post on X, adding that OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team in the meantime. The preparedness team helps evaluate the artificial general intelligence readiness of the company's AI models, a spokesperson for OpenAI said in a statement, adding that Madry will take on a bigger role within the research organization following the move. Madry did not immediately respond to requests for comment. "Joaquin and Lilian are taking over the preparedness team as part of unifying our safety work," Altman wrote in the post. The Information, which was first to report Madry's move, had said researcher Tejal Patwardhan will manage much of the team's work. The moves come at a time when OpenAI's chatbots, which can engage in human-like conversations and create videos and images based on text prompts, have become increasingly powerful and have stirred safety concerns. Earlier this year, the Microsoft-backed company formed a Safety and Security Committee to be led by board members, including CEO Sam Altman, in the run-up to training its next artificial intelligence model. (Reporting by Gursimran Kaur and Shreya Biswas in Bengaluru; Editing by Tasim Zahid and Sriraj Kalluvila)
[3]
OpenAI reassigns AI safety leader Madry, The Information reports
[4]
OpenAI assigns new project to AI safety leader Aleksander Madry in revamp
"Aleksander is working on a new and v(very) important research project," Altman said in a post on X, adding that OpenAI executives Joaquin Quinonero Candela and Lilian Weng will be taking over the preparedness team in the meanwhile.OpenAI Chief Executive Sam Altman said on Tuesday the ChatGPT maker's AI safety leader Aleksander Madry was working on a new research project, as the startup rejigs the preparedness team. "Aleksander is working on a new and v(very) important research project," Altman said in a post on X, adding that OpenAI executives Joaquin Quinonero Candela and Lilian Weng will be taking over the preparedness team in the meanwhile. The preparedness team helps to evaluate artificial general intelligence readiness of the company's AI models, a spokesperson for OpenAI said in statement, adding Madry will take on a bigger role within the research organization following the move. Madry did not immediately respond to requests for comment. "Joaquin and Lilian are taking over the preparedness team as part of unifying our safety work", Altman wrote in the post. The Information, which was the first to report Madry's move, had said researcher Tejal Patwardhan will manage much of the work of the team. The moves come when OpenAI's chatbots, which can engage in human-like conversations and create videos and images based on text prompts, have become increasingly powerful and have stirred safety concerns. Earlier this year, the Microsoft-backed company formed a Safety and Security Committee to be led by board members, including CEO Sam Altman, in the run-up to its training of its next artificial intelligence model.
[5]
OpenAI assigns new project to AI safety leader Aleksander Madry in revamp - ET Telecom
[6]
OpenAI assigns new project to AI safety leader Madry in revamp
[7]
OpenAI removes AI safety executive Aleksander Madry from role
Photo: OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.
OpenAI last week removed Aleksander Madry, one of its top safety executives, from his role and reassigned him to a job focused on AI reasoning, sources familiar with the situation confirmed to CNBC. Madry was OpenAI's head of preparedness, a team that was "tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models," according to a bio for Madry. Madry is also director of MIT's Center for Deployable Machine Learning and a faculty co-lead of the MIT AI Policy Forum, roles from which he is currently on leave, according to the university's website.
The decision to reassign Madry came less than a week before a group of Democratic senators sent a letter to OpenAI CEO Sam Altman concerning "questions about how OpenAI is addressing emerging safety concerns." The letter, sent Monday and viewed by CNBC, also stated, "We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company's identification and mitigation of cybersecurity threats." OpenAI did not immediately respond to a request for comment. The lawmakers requested that OpenAI respond with a series of answers to specific questions about its safety practices and financial commitments by August 13.
It's all part of a summer of mounting safety concerns and controversies surrounding OpenAI, which, along with Google, Microsoft, Meta and other companies, is at the helm of a generative AI arms race -- a market that is predicted to top $1 trillion in revenue within a decade -- as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
Earlier this month, Microsoft gave up its observer seat on OpenAI's board, stating in a letter viewed by CNBC that it can now step aside because it's satisfied with the construction of the startup's board, which has been revamped in the eight months since an uprising that led to the brief ouster of CEO Sam Altman and threatened Microsoft's massive investment in OpenAI.
But last month, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up. "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," the employees wrote at the time.
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies' conduct. FTC Chair Lina Khan has described her agency's action as a "market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers."
The current and former employees wrote in the June letter that AI companies have "substantial non-public information" about what their technology can do, the extent of the safety measures they've put in place and the risk levels that technology has for different types of harm. "We also understand the serious risks posed by these technologies," they wrote, adding that the companies "currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."
In May, OpenAI decided to disband its team focused on the long-term risks of AI just one year after it announced the group, a person familiar with the situation confirmed to CNBC at the time. The person, who spoke on condition of anonymity, said some of the team members were being reassigned to other teams within the company.
The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI's "safety culture and processes have taken a backseat to shiny products." CEO Sam Altman said on X at the time that he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to Brockman and Altman on X, asserting that the company has "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it."
"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X at the time. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact. "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," he wrote. "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done."
Leike added that OpenAI must become a "safety-first AGI company." "Building smarter-than-human machines is an inherently dangerous endeavor," he wrote at the time. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."
OpenAI, the artificial intelligence research laboratory, has reassigned its AI safety leader, Aleksander Madry, to a new project. This move is part of a broader reorganization within the company, signaling potential shifts in its approach to AI safety and development.
OpenAI, the prominent artificial intelligence research laboratory, has made a significant change in its organizational structure by reassigning Aleksander Madry, its AI safety leader, to a new project [1]. This move comes as part of a broader reorganization within the company, potentially signaling a shift in OpenAI's approach to AI safety and development.
Madry, a respected figure in the field of AI safety, has been tasked with leading a new project at OpenAI [2]. While the specifics of this project remain undisclosed, it is believed to be part of OpenAI's ongoing efforts to advance AI technology while maintaining a focus on safety and ethical considerations.
The reassignment of Madry has led to changes within OpenAI's safety organization. OpenAI executives Joaquin Quinonero Candela and Lilian Weng are taking over the Preparedness team, with researcher Tejal Patwardhan managing much of its day-to-day work [3]. This restructuring suggests a potential realignment of priorities and strategies within the organization's safety initiatives.
Despite the organizational changes, OpenAI maintains that AI safety remains a top priority for the company. The reassignment of Madry and the restructuring of the safety team are reportedly part of OpenAI's efforts to enhance its approach to AI development and safety [4].
This move by OpenAI comes at a time when the AI industry is under increasing scrutiny regarding the safety and ethical implications of advanced AI systems. The reassignment of a key figure like Madry could potentially influence the broader conversation around AI safety in the tech industry [5].
As OpenAI continues to evolve its organizational structure and research focus, industry observers will be keenly watching for any shifts in the company's approach to AI development and safety. The outcomes of Madry's new project and the performance of the restructured safety team under its new leadership will likely play crucial roles in shaping OpenAI's future trajectory in the rapidly advancing field of artificial intelligence.
OpenAI has announced the creation of a new independent board to oversee the safety and ethical implications of its AI technologies. This move comes as the company aims to address growing concerns about AI development and its potential risks.
15 Sources
OpenAI announces significant leadership changes, expanding COO Brad Lightcap's responsibilities while CEO Sam Altman focuses more on research and product development. The move aims to strengthen the company's global presence and partnerships in the rapidly evolving AI industry.
9 Sources
OpenAI, the creator of ChatGPT, has announced a partnership with the U.S. AI Safety Institute. The company commits to providing early access to its future AI models and emphasizes its dedication to AI safety in a letter to U.S. lawmakers.
3 Sources
OpenAI, the artificial intelligence research company, is experiencing significant changes in its leadership structure. CEO Sam Altman aims to flatten the organization and promote new leaders as the company considers transitioning to a for-profit model.
33 Sources
Several senior AI safety researchers have left OpenAI, citing shifts in company culture and concerns about the prioritization of AI safety in the development of advanced AI systems.
3 Sources