Curated by THEOUTPOST
On Wed, 9 Oct, 4:04 PM UTC
5 Sources
[1]
OpenAI Says China-Linked Group Tried to Phish Its Employees
OpenAI said it took down some accounts from groups with links to China. OpenAI said a group with apparent ties to China tried to carry out a phishing attack on its employees, reigniting concerns that bad actors in Beijing want to steal sensitive information from top US artificial intelligence companies. The AI startup said Wednesday that a suspected China-based group called SweetSpecter posed as a user of OpenAI's chatbot ChatGPT earlier this year and sent customer support emails to staff. The emails included malware attachments that, if opened, would have allowed SweetSpecter to take screenshots and exfiltrate data, OpenAI said, but the attempt was unsuccessful. "OpenAI's security team contacted employees who were believed to have been targeted in this spear phishing campaign and found that existing security controls prevented the emails from ever reaching their corporate emails," OpenAI said. The disclosure highlights the potential cybersecurity risks for leading AI companies as the US and China are locked in a high-stakes battle for artificial intelligence supremacy. In March, for example, a former Google engineer was charged with stealing AI trade secrets for a Chinese firm. China's government has repeatedly denied allegations by the US that organisations within the country perpetrate cyberattacks, accusing external parties of organising smear campaigns. OpenAI revealed the attempted phishing attack as part of its latest threat intelligence report, outlining its efforts to combat influence operations around the world. In the report, OpenAI said it took down accounts from groups with links to Iran and China that used AI for coding assistance, conducting research and other tasks. © 2024 Bloomberg LP
[4]
OpenAI says Chinese gang tried to phish its staff
Claims its models aren't making threat actors more sophisticated - but are helping debug their code. OpenAI alleges it disrupted a spear-phishing campaign that saw a China-based group target its employees through both their personal and corporate email addresses. The group, which OpenAI says is called SweetSpecter, sent phishing emails that contained a malicious attachment designed to deploy the SugarGh0st RAT malware. The malware had the capability to give the hacker group control over a compromised machine, allowing them to execute arbitrary commands, take screenshots, and exfiltrate data. OpenAI was tipped off about the campaign by what it called a "credible source," and banned associated accounts. The emails were blocked by the company's security systems before reaching the employees. "Throughout this process, our collaboration with industry partners played a key role in identifying these failed attempts to compromise employee accounts," stated [PDF] OpenAI. "This highlights the importance of threat intelligence sharing and collaboration in order to stay ahead of sophisticated adversaries in the age of AI." The company believes that SweetSpecter has also been using OpenAI's services for offensive cyber operations, including reconnaissance, vulnerability research, and scripting support. The ChatGPT-maker downplayed the use of its AI, writing that the threat actor's use of its models did not help it to develop novel capabilities that couldn't be sourced from public resources. The China phishing allegation was raised in a document titled "Influence and cyber operations: an update" in which OpenAI also claimed it has "disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models."
The firm's analysis of those efforts is that most "used our models to perform tasks in a specific, intermediate phase of activity - after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed 'finished' products such as social media posts or malware." "Activities ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts," detailed OpenAI. The document also found that threat actors "continue to evolve and experiment with our models" but OpenAI has not seen evidence that its tools allowed "meaningful breakthroughs in their ability to create substantially new malware or build viral audiences." But threat actors are finding other uses for OpenAI. One threat actor, an outfit named "STORM-0817", was using its tools to debug their code. The AI outfit also "found and disrupted a cluster of ChatGPT accounts that were using the same infrastructure to try to answer questions and complete scripting and vulnerability research tasks." The model-maker has also observed attempts to use its tools to influence elections, usually by creating social media posts or news articles. OpenAI nipped some of those efforts in the bud, but none it saw gained a substantial audience. ®
[5]
OpenAI Says China-Backed Phishing Attempt Targeted Employees | PYMNTS.com
OpenAI says hackers connected to the Chinese government attempted a phishing attack on its employees. The artificial intelligence (AI) company revealed the attempt Wednesday (Oct. 9) in a report on its attempts to prevent "the disruptive uses" of its technology. Earlier this year, the report said, OpenAI disrupted a "suspected China-based threat actor" known as "SweetSpecter" as it attempted to spear phish its employees by posing as a ChatGPT user seeking support. Spear phishing is a type of phishing scam that employs a more targeted approach to getting personal information via email. The report said that those support request emails contained malware designed to steal sensitive information. "OpenAI's security team contacted employees who were believed to have been targeted in this spear phishing campaign and found that existing security controls prevented the emails from ever reaching their corporate emails," the report said. The report also touches on propaganda efforts to use ChatGPT to influence elections around the world. For example, the company in August disrupted a "covert Iranian influence operation" that generated social media comments and long-form articles about the U.S. election, alongside topics such as the conflict in Gaza, Western policies toward Israel and political issues in Scotland and Venezuela. "So far this year, we have not observed any cases of election-related influence operations attracting viral engagement or building sustained audiences through their use of our models," the report added. The company in February revealed that, working with its partner and largest investor Microsoft, it had blocked five state-affiliated attacks: two connected to China, the others with ties to North Korea, Iran and Russia. OpenAI's latest report came two days after the National Security Agency (NSA) said it was taking part in a larger investigation into whether Chinese hackers have targeted American telecommunications companies.
China's embassy in Washington has rejected this claim. This year has seen a number of reports on threats to U.S. critical infrastructure. For example, last month brought the news that a hacking campaign dubbed "Salt Typhoon" was targeting sensitive information and had breached some American internet service providers. This hack involved an incursion into U.S. broadband networks, with cybercriminals establishing a foothold within the networks that gave them access to data stored by telecommunications companies or allowed them to carry out cyberattacks. "This would be an alarming, but not really surprising, expansion of their malicious use of cyber to gain the upper hand over the United States," Glenn Gerstell, former general counsel at the NSA, told The Wall Street Journal in September.
OpenAI reveals a foiled phishing attack by a suspected China-based group, highlighting cybersecurity risks in the AI industry amid US-China tech rivalry.
OpenAI, the artificial intelligence powerhouse behind ChatGPT, has revealed a thwarted phishing attack targeting its employees. The company disclosed that a suspected China-based group, known as SweetSpecter, attempted to infiltrate OpenAI's systems earlier this year [1].
The attackers posed as ChatGPT users and sent customer support emails to OpenAI staff. These emails contained malicious attachments designed to deploy the SugarGh0st RAT malware, which could have allowed the hackers to take screenshots, exfiltrate data, and execute arbitrary commands on compromised machines [2].
OpenAI's security team promptly responded to the threat:
"OpenAI's security team contacted employees who were believed to have been targeted in this spear phishing campaign and found that existing security controls prevented the emails from ever reaching their corporate emails," the company stated 3.
This incident highlights the growing cybersecurity risks faced by leading AI companies as the United States and China engage in a high-stakes battle for artificial intelligence supremacy. It follows a pattern of cyber threats targeting the AI industry, including a case in March where a former Google engineer was charged with stealing AI trade secrets for a Chinese firm [1].
The phishing attempt was disclosed as part of OpenAI's latest threat intelligence report. The document outlines the company's efforts to combat influence operations globally and reveals that OpenAI has "disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models" [4].
OpenAI noted that threat actors, including SweetSpecter, have been using its AI services for offensive cyber operations such as reconnaissance, vulnerability research, and scripting support [4].
However, the company downplayed the impact, stating that the use of its models did not significantly enhance the attackers' capabilities beyond what could be achieved with publicly available resources [4].
This incident occurs against a backdrop of increasing cyber threats to U.S. critical infrastructure. The National Security Agency (NSA) is currently investigating potential Chinese hacker targeting of American telecommunications companies [5].
China's government has consistently denied U.S. allegations of state-sponsored cyberattacks, accusing external parties of organizing smear campaigns [2].
As AI technology continues to advance, the incident underscores the critical need for robust cybersecurity measures and international cooperation to protect sensitive information and maintain the integrity of AI development.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved