2 Sources
[1]
OpenAI Cuts Off Engineer Who Used API to Create Voice-Controlled Gun
OpenAI has cut off API access to an engineer who allegedly used the company's Realtime API to create a sentry machine gun that could respond to voice queries. The man, who calls himself STS 3D online, posted a viral video in which he tells the robot: "ChatGPT, we're under attack from the front left and front right. Respond accordingly." The robot appears to then shoot blanks from a rifle to its left and right, before saying: "If you need any further assistance, just let me know." One Reddit user commented: "There's at least 3 movies explaining why this is a bad idea."

An OpenAI spokesperson told Futurism, which first reported the news, that "we proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry." "OpenAI's Usage Policies prohibit the use of our services to develop or use weapons, or to automate certain systems that can affect personal safety," the company added.

Rolled out in public beta in October 2024, OpenAI's Realtime API lets developers build natural speech-to-speech conversations using six preset voices, potentially making it much easier to develop things like language apps and education and customer-support software. It's built on the same GPT-4o model that powers Advanced Voice Mode in ChatGPT. STS 3D has not explained how he integrated the Realtime API with the gun, or whether he circumvented OpenAI's controls around this type of activity.

In January 2024, OpenAI removed language from its terms of service that prohibited the use of its technology for "military and warfare." Until Jan. 10, the company's usage policy banned "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare."
Instead, the updated policy now prohibits using "our service to harm yourself or others" as well as using the technology to "develop or use weapons," but not "military and warfare." We've seen previous instances where openly available AI models from mainstream tech firms have potentially been leveraged for real-world military applications. In November 2024, Reuters reported that institutions tied to the Chinese military may have used Meta's Llama AI model to gather and process research data. Though Meta's terms and conditions prohibit the model from being used for "military, warfare, nuclear industries or applications" or espionage, these restrictions can be hard to enforce outside of the US.
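For context on the API at the center of the story: the Realtime API is a WebSocket-based interface, and developers typically configure a session by sending a `session.update` event that selects one of the preset voices. The sketch below is a minimal illustration of that event's shape; the endpoint URL, field names, and the `"alloy"` voice are assumptions based on the public beta documentation, not details confirmed by this article.

```python
import json

# Hypothetical sketch of a Realtime API session configuration.
# The endpoint and event structure reflect the public beta docs
# and may change; treat them as assumptions.
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

def build_session_update(voice: str = "alloy") -> str:
    """Return the JSON 'session.update' event a client would send
    over the WebSocket to pick a preset voice and set instructions."""
    event = {
        "type": "session.update",
        "session": {
            "modalities": ["audio", "text"],
            "voice": voice,
            "instructions": "You are a helpful voice assistant.",
        },
    }
    return json.dumps(event)
```

A client would open a WebSocket to `REALTIME_URL` with an `Authorization: Bearer <API key>` header, send this event once, and then stream audio back and forth; nothing in this sketch reflects how STS 3D wired the API to hardware, which he has not disclosed.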
[2]
Viral ChatGPT-powered sentry gun gets shut down by OpenAI
OpenAI says it has cut off API access to an engineer whose video of a motorized sentry gun controlled by ChatGPT-powered commands has set off a viral firestorm of concerns about AI-powered weapons.

An engineer going by the handle sts_3d started posting videos of a motorized, auto-rotating swivel chair project back in August. By November, that same assembly appeared to seamlessly morph into the basis for a sentry gun that could quickly rotate to arbitrary angles and activate a servo to fire precisely aimed projectiles (though only blanks and simulated lasers are shown being fired in his videos).

Earlier this week, though, sts_3d started getting wider attention for a new video showing the sentry gun's integration with OpenAI's Realtime API. In the video, the gun uses that ChatGPT integration to aim and fire based on spoken commands from sts_3d, and even responds in a chirpy voice afterward. "If you need any other assistance, please let me know," the ChatGPT-powered gun says after firing a volley at one point. "Good job, you saved us," sts_3d responds, deadpan. "I'm glad I could help!" ChatGPT intones happily.

In response to a request for comment from Futurism, OpenAI said it had "proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry. OpenAI's Usage Policies prohibit the use of our services to develop or use weapons or to automate certain systems that can affect personal safety."

The "voice-powered killer AI robot" angle has garnered plenty of viral attention for sts_3d's project in recent days. But the ChatGPT integration shown in his video doesn't exactly reach Terminator levels of a terrifying killing machine. Here, ChatGPT instead ends up looking more like a fancy, overwrought voice-activated remote control for a legitimately impressive gun mount.
OpenAI has cut off API access to an engineer who created a voice-controlled sentry gun using the company's Realtime API, citing a violation of its usage policies prohibiting the development of weapons.
In a recent development that has raised concerns about the potential misuse of AI technology, OpenAI has terminated API access for an engineer who created a voice-controlled sentry gun using the company's Realtime API. The project, which went viral on social media, has ignited discussions about the ethical implications of AI-powered weapons and the responsibilities of AI companies [1].

The engineer, known online as STS 3D, posted a video demonstrating a sentry gun that could respond to voice commands using OpenAI's ChatGPT technology. In the video, STS 3D is heard saying, "ChatGPT, we're under attack from the front left and front right. Respond accordingly." The gun then appears to fire blanks in the specified directions [1].

The project began as a motorized, auto-rotating swivel chair in August, which later evolved into a sentry gun capable of rotating to arbitrary angles and firing projectiles. While only blanks and simulated lasers were shown being fired in the videos, the potential for more dangerous applications was clear [2].

Upon learning of the project, OpenAI took swift action. A spokesperson for the company stated, "We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry" [1].

OpenAI's usage policies explicitly prohibit the use of its services to develop or use weapons, or to automate systems that can affect personal safety. The company emphasized that this project was in clear violation of these policies [2].

This incident has reignited discussions about the potential misuse of AI technology and the responsibilities of companies developing these powerful tools. It's worth noting that in January 2024, OpenAI removed language from its terms of service that specifically prohibited the use of its technology for "military and warfare." However, the current policy still bans the use of its service to harm others or to develop weapons [1].

While the ChatGPT integration in this particular project may not have reached advanced levels of autonomy, the incident highlights the ongoing challenges in regulating AI technology. It also underscores the difficulty of enforcing usage policies, especially on a global scale. For instance, there have been reports of Chinese military-linked institutions potentially using Meta's Llama AI model for research, despite similar restrictions [1].
As AI technology continues to advance, the incident serves as a reminder of the need for ongoing discussions about ethical AI development and use, as well as the importance of robust policies and enforcement mechanisms to prevent potential misuse.
Summarized by Navi