Curated by THEOUTPOST
On Sat, 11 Jan, 4:02 PM UTC
2 Sources
[1]
OpenAI Cuts Off Engineer Who Used API to Create Voice-Controlled Gun
OpenAI has cut off API access to an engineer who allegedly used the company's Realtime API to build a sentry machine gun that responds to voice queries. The engineer, who goes by STS 3D online, posted a viral video in which he tells the robot: "ChatGPT, we're under attack from the front left and front right. Respond accordingly." The robot appears to fire blanks from a rifle to its left and right before replying: "If you need any further assistance, just let me know." One Reddit user commented: "There's at least 3 movies explaining why this is a bad idea."

An OpenAI spokesperson told Futurism, which first reported the news, that "we proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry." "OpenAI's Usage Policies prohibit the use of our services to develop or use weapons, or to automate certain systems that can affect personal safety," the company added.

Rolled out in public beta in October 2024, OpenAI's Realtime API lets developers build natural speech-to-speech conversations using six preset voices, potentially simplifying the development of things like language apps and education and customer-support software. It is built on the same GPT-4o model that powers Advanced Voice Mode in ChatGPT.

STS 3D has not explained how he integrated the Realtime API with the gun, or whether he subverted OpenAI's controls around this type of activity.

In January 2024, OpenAI removed language from its terms of service that prohibited the use of its technology for "military and warfare." Until Jan. 10, the company's usage policy banned "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare."
Instead, the updated policy now prohibits using "our service to harm yourself or others" as well as using the technology to "develop or use weapons," but not "military and warfare." We've seen previous instances where openly available AI models from mainstream tech firms have potentially been leveraged for real-world military applications. In November 2024, Reuters reported that institutions tied to the Chinese military may have used Meta's Llama AI model to gather and process research data. Though Meta's terms and conditions prohibit the model from being used for "military, warfare, nuclear industries or applications" or espionage, these restrictions can be hard to enforce outside of the US.
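For context on the API at the center of the story: Realtime API sessions run over a WebSocket and are driven by JSON events, one of which selects a preset voice and sets the assistant's instructions. A minimal sketch of that client-side event is below; the endpoint, header, model name, and field names reflect the public-beta documentation and should be treated as assumptions that may have since changed.

```python
import json

# Connection details for OpenAI's Realtime API (public beta).
# YOUR_API_KEY is a placeholder, not a real credential.
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": "Bearer YOUR_API_KEY",
    "OpenAI-Beta": "realtime=v1",
}

def make_session_update(voice: str, instructions: str) -> str:
    """Build the JSON 'session.update' event that picks one of the
    six preset voices and sets the assistant's behavior."""
    event = {
        "type": "session.update",
        "session": {
            "voice": voice,                   # e.g. "alloy", one of the presets
            "instructions": instructions,
            "modalities": ["audio", "text"],  # speech-to-speech plus transcripts
        },
    }
    return json.dumps(event)

print(make_session_update("alloy", "You are a friendly language tutor."))
```

In a real client this string would be sent over the WebSocket after connecting with the headers above; it is shown here only to illustrate the event-driven shape of the API the article describes.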
[2]
Viral ChatGPT-powered sentry gun gets shut down by OpenAI
OpenAI says it has cut off API access to an engineer whose video of a motorized sentry gun controlled by ChatGPT-powered commands has set off a viral firestorm of concerns about AI-powered weapons.

An engineer going by the handle sts_3d started posting videos of a motorized, auto-rotating swivel chair project back in August. By November, that same assembly appeared to seamlessly morph into the basis for a sentry gun that could quickly rotate to arbitrary angles and activate a servo to fire precisely aimed projectiles (though only blanks and simulated lasers are shown being fired in his videos).

Earlier this week, though, sts_3d started getting wider attention for a new video showing the sentry gun's integration with OpenAI's Realtime API. In the video, the gun uses that ChatGPT integration to aim and fire based on spoken commands from sts_3d and even responds in a chirpy voice afterward. "If you need any other assistance, please let me know," the ChatGPT-powered gun says after firing a volley at one point. "Good job, you saved us," sts_3d responds, deadpan. "I'm glad I could help!" ChatGPT intones happily.

In response to a comment request from Futurism, OpenAI said it had "proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry. OpenAI's Usage Policies prohibit the use of our services to develop or use weapons or to automate certain systems that can affect personal safety."

The "voice-powered killer AI robot angle" has garnered plenty of viral attention for sts_3d's project in recent days. But the ChatGPT integration shown in his video doesn't exactly reach Terminator levels of a terrifying killing machine. Here, ChatGPT instead ends up looking more like a fancy, overwrought voice-activated remote control for a legitimately impressive gun mount.
OpenAI has cut off API access to an engineer who created a voice-controlled sentry gun using the company's Realtime API, citing a violation of its usage policies prohibiting the development of weapons.
In a development that has raised concerns about the potential misuse of AI technology, OpenAI has terminated API access for an engineer who created a voice-controlled sentry gun using the company's Realtime API. The project, which went viral on social media, has ignited discussion about the ethical implications of AI-powered weapons and the responsibilities of AI companies [1].

The engineer, known online as STS 3D, posted a video demonstrating a sentry gun that responds to voice commands via OpenAI's ChatGPT technology. In the video, STS 3D says, "ChatGPT, we're under attack from the front left and front right. Respond accordingly." The gun then appears to fire blanks in the specified directions [1].

The project began as a motorized, auto-rotating swivel chair in August and later evolved into a sentry gun capable of rotating to arbitrary angles and firing projectiles. While only blanks and simulated lasers were shown being fired in the videos, the potential for more dangerous applications was clear [2].

Upon learning of the project, OpenAI took swift action. A company spokesperson stated, "We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry" [1].

OpenAI's usage policies explicitly prohibit using its services to develop or use weapons, or to automate systems that can affect personal safety. The company said the project was in clear violation of these policies [2].

The incident has reignited debate about the potential misuse of AI technology and the responsibilities of the companies building it. Notably, in January 2024 OpenAI removed language from its terms of service that specifically prohibited the use of its technology for "military and warfare." The current policy still bans using its services to harm others or to develop weapons [1].

While the ChatGPT integration in this particular project may not amount to meaningful autonomy, the episode highlights the ongoing challenge of regulating AI technology and of enforcing usage policies at a global scale. There have been reports, for instance, of Chinese military-linked institutions potentially using Meta's Llama model for research despite similar restrictions [1].

As AI technology continues to advance, the incident serves as a reminder of the need for ongoing discussion of ethical AI development and use, and for robust policies and enforcement mechanisms to prevent misuse.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved