OpenAI Terminates API Access for Engineer's ChatGPT-Powered Sentry Gun Project


OpenAI has cut off API access for an engineer who created a voice-controlled sentry gun using the company's Realtime API, citing a violation of its usage policies, which prohibit the development of weapons.


Engineer's ChatGPT-Powered Sentry Gun Sparks Controversy

OpenAI has terminated API access for an engineer who built a voice-controlled sentry gun using the company's Realtime API. The project, which went viral on social media, has ignited debate over the ethical implications of AI-powered weapons and the responsibilities of AI companies.[1]

The Controversial Project

The engineer, known online as STS 3D, posted a video demonstrating a sentry gun that could respond to voice commands using OpenAI's ChatGPT technology. In the video, STS 3D is heard saying, "ChatGPT, we're under attack from the front left and front right. Respond accordingly." The gun then appears to fire blanks in the specified directions.[1]

The project began in August as a motorized, auto-rotating swivel chair, which later evolved into a sentry gun capable of rotating to arbitrary angles and firing projectiles. While only blanks and simulated lasers were shown being fired in the videos, the potential for more dangerous applications was clear.[2]

OpenAI's Response

Upon learning of the project, OpenAI took swift action. A spokesperson for the company stated, "We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry."[1]

OpenAI's usage policies explicitly prohibit the use of its services to develop or use weapons, or to automate systems that can affect personal safety. The company emphasized that this project was in clear violation of those policies.[2]

Ethical Concerns and Policy Changes

The incident has reignited debate over the potential misuse of AI technology and the responsibilities of the companies developing these powerful tools. Notably, in January 2024 OpenAI removed language from its terms of service that specifically prohibited the use of its technology for "military and warfare." The current policy, however, still bans the use of its services to harm others or to develop weapons.[1]

Broader Implications

While the ChatGPT integration in this particular project may not have reached advanced levels of autonomy, the incident highlights the ongoing challenges in regulating AI technology. It also underscores the difficulty of enforcing usage policies, especially on a global scale. For instance, there have been reports of Chinese military-linked institutions potentially using Meta's Llama AI model for research, despite similar restrictions.[1]

As AI technology continues to advance, the incident serves as a reminder of the need for ongoing discussion of ethical AI development and use, and of the importance of robust policies and enforcement mechanisms to prevent misuse.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited