Curated by THEOUTPOST
On Sat, 13 Jul, 4:01 PM UTC
2 Sources
[1]
A hacker stole OpenAI secrets, raising fears that China could, too
SAN FRANCISCO -- Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company's artificial intelligence technologies.

The hacker lifted details from discussions in an online forum where employees talked about OpenAI's latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its AI.

OpenAI executives revealed the incident to employees during an all-hands meeting at the company's San Francisco offices in April 2023 and informed its board of directors, according to the two people, who discussed sensitive information about the company on the condition of anonymity.

But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the FBI or anyone else in law enforcement.

For some OpenAI employees, the news raised fears that foreign adversaries such as China could steal AI technology that -- while now mostly a work and research tool -- could eventually endanger U.S. national security. It also led to questions about how seriously OpenAI was treating security, and exposed fractures inside the company about the risks of AI.

After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI's board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI's security wasn't strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.

"We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation," said an OpenAI spokesperson, Liz Bourgeois. Referring to the company's efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, "While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company."

Fears that a hack of a U.S. technology company might have links to China are not unreasonable. Last month, Brad Smith, Microsoft's president, testified on Capitol Hill about how Chinese hackers used the tech giant's systems to launch a wide-ranging attack on federal government networks.

However, under federal and California law, OpenAI cannot prevent people from working at the company because of their nationality, and policy researchers have said that barring foreign talent from U.S. projects could significantly impede the progress of AI in the United States.

"We need the best and brightest minds working on this technology," Matt Knight, OpenAI's head of security, told The New York Times in an interview. "It comes with some risks, and we need to figure those out."

OpenAI is not the only company building increasingly powerful systems using rapidly improving AI technology. Some of them -- most notably, Meta, the owner of Facebook and Instagram -- are freely sharing their designs with the rest of the world as open source software. They believe that the dangers posed by today's AI technologies are slim and that sharing code allows engineers and researchers across the industry to identify and fix problems.

Today's AI systems can help spread disinformation online, including text, still images and, increasingly, videos. They are also beginning to take away some jobs.

Companies like OpenAI and its competitors Anthropic and Google add guardrails to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread disinformation or cause other problems.

But there is not much evidence that today's AI technologies are a significant national security risk. Studies by OpenAI, Anthropic and others over the past year showed that AI was not significantly more dangerous than search engines.

Daniela Amodei, an Anthropic co-founder and the company's president, said its latest AI technology would not be a major risk if its designs were stolen or freely shared with others. "If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is, 'No, probably not,'" she told the Times last month. "Could it accelerate something for a bad actor down the road? Maybe. It is really speculative."

Still, researchers and tech executives have long worried that AI could one day fuel the creation of new bioweapons or help break into government computer systems. Some even believe it could destroy humanity.

A number of companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to the OpenAI board of directors.

"We started investing in security years before ChatGPT," Knight said. "We're on a journey not only to understand the risks and stay ahead of them but also to deepen our resilience."

Federal officials and state lawmakers are also pushing toward government regulations that would ban companies from releasing certain AI technologies and fine them millions if their technologies caused harm. But experts say these dangers are still years or even decades away.

Chinese companies are building systems of their own that are nearly as powerful as the leading U.S. systems. By some metrics, China has eclipsed the United States as the biggest producer of AI talent, with the country generating almost half the world's top AI researchers.

"It is not crazy to think that China will soon be ahead of the U.S.," said Clément Delangue, CEO of Hugging Face, a company that hosts many of the world's open source AI projects.

Some researchers and national security leaders argue that the mathematical algorithms at the heart of current AI systems, while not dangerous today, could become dangerous and are calling for tighter controls on AI labs.
"Even if the worst-case scenarios are relatively low-probability, if they are high-impact, then it is our responsibility to take them seriously," Susan Rice, former domestic policy adviser to President Joe Biden and former national security adviser for President Barack Obama, said during an event in Silicon Valley last month. "I do not think it is science fiction, as many like to claim." This article originally appeared in The New York Times.
[2]
Whistleblowers Say OpenAI Broke Promise to Rigorously Test AI for Danger Before Releasing
Tech leaders have warned about the potential dangers of the very AIs they're developing, while harping on about the need for regulation. The sincerity of this cautionary mien has always been suspect, however, and now there's more evidence to suggest that OpenAI, a leader in the space, hasn't been practicing what its CEO Sam Altman has been publicly preaching.

Now, The Washington Post reports that members of OpenAI's safety team said they felt pressured to rush through testing "designed to prevent the technology from causing catastrophic harm" of its GPT-4 Omni large language model, which now powers ChatGPT -- all so the company could push out its product by its May launch date. In sum, they say, OpenAI treated GPT-4o's safeness as a foregone conclusion.

"They planned the launch after-party prior to knowing if it was safe to launch," an anonymous individual familiar with the matter told WaPo. "We basically failed at the process."

A venial sin, perhaps -- but one that reflects a seemingly flippant attitude towards safety by the company's leadership. These aren't the first people close to the company to sound the alarm. In June, a group of OpenAI insiders -- both current and former employees -- warned in an open letter that the company was skirting safety in favor of "recklessly" racing for dominance in the industry. They also claimed there was a culture of retaliation that led to safety concerns being silenced.

This latest disclosure shows that OpenAI is failing to live up to the standards imposed by President Joe Biden's executive AI order, which laid out somewhat vague rules for how the industry's leaders, like Google and Microsoft -- which backs OpenAI -- should police themselves. The current practice is that companies conduct their own safety tests on their AI models, and then submit the results to the federal government for review.

When testing GPT-4o, however, OpenAI squeezed its testing down into a single week, according to WaPo's sources. Employees protested, as they were well within their rights to -- for surely that wouldn't be enough time to rigorously test the model.

OpenAI has downplayed these charges with specious language -- and it still comes off sounding a little guilty. Spokesperson Lindsey Held insisted that the company "didn't cut corners on our safety process," and merely acknowledged that the launch was "stressful" for employees. Meanwhile, an anonymous member of the company's preparedness team told the WaPo that there was enough time to complete the tests, thanks to "dry runs" conducted ahead of time, but admitted that the testing had been "squeezed."

"I definitely don't think we skirted on [the tests]," the representative added. "After that, we said, 'Let's not do it again.'" A mark of trust in the process if there ever was one.
OpenAI, the maker of ChatGPT, is reported to have kept quiet about a 2023 breach in which a hacker stole details about its AI technology. At the same time, whistleblowers say the company rushed the safety testing of GPT-4o ahead of its May launch.
According to The New York Times, a hacker gained access to OpenAI's internal messaging systems early last year and stole details about the design of the company's AI technologies [1]. OpenAI told employees and its board about the incident in April 2023 but did not disclose it publicly or notify law enforcement, because no customer or partner data had been taken and the hacker appeared to be a private individual with no known ties to a foreign government.
The hacker lifted details from an internal online forum where employees discussed OpenAI's latest technologies, but did not reach the systems where the company houses and builds its AI [1].
Even so, the incident raised fears among some employees that foreign adversaries such as China could steal AI technology that might eventually endanger U.S. national security, and it exposed internal disagreements over how seriously OpenAI treats security [1].
Separately, OpenAI faces accusations that it shortchanged the safety testing of its GPT-4o model [2].
Members of OpenAI's safety team told The Washington Post they felt pressured to compress testing meant to prevent catastrophic harm into a single week so the company could meet its May launch date [2]. The episode sits uneasily with the self-policing regime set out in President Joe Biden's executive order on AI, under which companies run their own safety tests and submit the results to the federal government.
Rigorous pre-release testing is intended to catch biases, security flaws, and unintended consequences that internal teams might overlook. The reported rush follows an open letter in June from current and former employees accusing the company of "recklessly" racing for dominance and of fostering a culture in which safety concerns are silenced [2].
OpenAI disputes both characterizations. Spokesperson Liz Bourgeois said the 2023 incident was addressed and shared with the board at the time [1], while spokesperson Lindsey Held said the company "didn't cut corners on our safety process," and a member of its preparedness team said earlier "dry runs" left enough time to complete the GPT-4o tests, though the schedule was "squeezed" [2].
How OpenAI handles these twin controversies will influence expectations for security practices and safety standards across the AI industry, at a moment when lawmakers are weighing regulation and Chinese companies are building systems nearly as powerful as the leading U.S. ones.