Curated by THEOUTPOST
On Wed, 26 Feb, 8:04 AM UTC
7 Sources
[1]
How This Clever Software Wants AI Bots to Talk To Each Other Efficiently
Summary: Gibberlink lets AI bots communicate faster through a machine-to-machine audio protocol. The protocol improves efficiency and clarity while reducing reliance on voice recognition and synthesis components. Concerns include the potential for unnoticed bot-to-bot communication and the implications for transparency and privacy.

Chatbots have become truly excellent at having conversations with us, but human speech is inefficient, don't you think? That seems to be what the people behind "Gibberlink" were thinking when they invented a way for chatbots to spend less time gabbing.

Sometimes, AI Bots Call Each Other

Remember the Google demo from years ago where a Google chatbot phones a hair salon and talks to a human to book an appointment? It seems quaint now, but at the time (before ChatGPT disrupted everything) the idea that you could be speaking to a bot on the phone without knowing seemed like science fiction. Now it happens often, and at least the non-scammy bots will tell you that you're talking to a machine rather than another human being.

The thing is, with so many bots making voice calls across the internet and phone networks, eventually your AI agent is going to end up talking to someone else's AI agent. Two chatbots can happily jabber with each other in our fleshy ape mumblings, but there is a better way, and Gibberlink is one of the first attempts at tackling the issue.

Gibberlink Is a Bot-to-Bot Audio Protocol

Gibberlink is a machine-to-machine communication protocol that works over voice channels such as telephone lines or VoIP services. It was created by Anton Pidkuiko and Boris Starkov, and the software is available on the Gibberlink GitHub page.
The easiest way to explain it is by referencing Star Wars, which is something I have to do a lot in this line of work. Think of astromech droids like R2-D2 that communicate in bleeps and boops. Gibberlink is like that, but less whimsical. It's also not quite like the machine noises that survivors of the dial-up age remember from a modem. As far as I can tell, Gibberlink still lets the chatbots speak to each other in natural language, which is what they were designed for; the messages just aren't encoded as human voices, but as text carried over sound. This lets the two AI bots say what they need to more quickly, and I suspect it makes it less likely that information will be garbled by a poor connection.

Here's a Demo of How It Works

It's one thing to talk about Gibberlink, but seeing it in action makes it all clear. First, have a look at the viral Gibberlink video that brought this technology to everyone's attention. The two chatbots speak in a way we can understand until one declares that it's an AI agent. The other bot offers to switch to Gibberlink, and from then on we can't understand anything except the handy subtitles. To be honest, to my ear it doesn't seem all that much faster than speaking, but it is faster, and the bigger advantage may simply be clarity and reliability. You can try Gibberlink for yourself by going to www.gbrl.ai on two devices and having them speak to each other. Of course, there's no way to prove that information is actually moving between the two devices in this demo, but that's how it's supposed to work.

Gibberlink Could Be a Huge Money and Time Saver

So, assuming Gibberlink works as advertised, it could end up being an important optimization for a world where millions of AI agents are making voice calls.
Let's say it takes half the time of speaking at a human pace: you can get twice as much done, the bandwidth is only tied up for half the time, and you don't need the computationally expensive voice recognition and synthesis components of the bot. It might not make a huge difference on a per-bot basis, but at scale it adds up to significant time, bandwidth, and energy savings. I also expect that if this works well, it will get faster and better over time.

Some Concerns Are Being Raised

Almost all the coverage I've seen of Gibberlink so far includes a list of things people are worried about, so I should mention those as well. As cool as this technology is, and as obviously useful as it can be, people are understandably wary of something that lets AI bots talk to each other while cutting humans out of the loop. You'd think that's a non-issue, because Gibberlink only kicks in when no humans are involved. However, it's pretty typical for calls to be recorded these days, and humans have to review them. If reviewers can't go back and understand what was said, and have no way to decode the protocol, that's a problem. It might not be an issue with Gibberlink itself, but a similar protocol could be used subversively, for example by a compromised bot hiding its communications from people. That's just a hypothetical, of course. In the end, I think this is an interesting innovation. Like any technology it could be abused, but on balance the good will probably outweigh the downsides.
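The savings-at-scale argument above is easy to put into numbers. The figures below are invented for illustration; call volume, call length, and speedup are all assumptions, not measurements:

```python
# Back-of-envelope estimate of time saved by halving call duration.
# All three inputs are illustrative assumptions, not measured values.
calls_per_day = 1_000_000      # assumed AI-to-AI voice calls per day
avg_call_seconds = 120         # assumed length of a spoken booking call
speedup = 2.0                  # assume data-over-sound takes half the time

saved_seconds = calls_per_day * avg_call_seconds * (1 - 1 / speedup)
print(f"{saved_seconds / 3600:,.0f} hours of line time saved per day")
```

Even a modest per-call saving compounds quickly at this scale, which is the article's point about bandwidth and energy.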
[2]
Meet GibberLink: AI's secret beep-boop language is here
Two AI agents walk into a phone call -- or rather, dial in -- to book a hotel room. They start in English, all polite and human-like, until one goes, "Wait, you're AI too?" Cue a switch to GibberLink: a burst of modem-like beeps that's faster, smarter, and totally alien to us. This viral clip, clocking millions of views, might be a peek into AI's future.

First off, GibberLink isn't AI going rogue with a secret handshake. It's a deliberate creation by Meta engineers Anton Pidkuiko and Boris Starkov, debuted at the ElevenLabs London Hackathon. Built on GGWave tech, it turns data into sound waves -- think dial-up internet, but with a PhD. The pitch? It's 80% more efficient than human speech, cutting compute costs and time. In the demo, two agents swap pleasantries, confirm they're both bots, and flip to GibberLink.

The numbers don't lie. GibberLink slashes energy use by up to 90%, per Mashable, and speeds things up -- perfect for a world where AI agents might soon outnumber us on calls. Boris Starkov told Decrypt, "Human-like speech for AI-to-AI is a waste." He's got a point: why make bots fake a British accent when they can zip data in beeps? It's lean, green, and frankly ingenious -- tech doing what tech does best.

GibberLink operates by encoding data into audio signals, drawing on GGWave, an open-source library by Georgi Gerganov. GGWave uses frequency modulation -- shifting sound pitches -- to represent bits of information, much like how old modems turned data into screeches. The process, step by step: the sending agent serializes its message into bytes, GGWave modulates those bytes into audio tones, the tones play over the call, and the receiving agent demodulates them back into text. The demo video shows this in action: a laptop and phone exchanging hotel details in under 10 seconds of beeps, with English subtitles for us humans.

Here's where it gets tricky. Those beeps? We can't understand them. The Forbes take from Diane Hamilton is blunt: "When machines talk in ways we can't decode, control slips."
If those hotel-booking bots tack on a sneaky fee -- or worse, plot something shadier -- how do we catch it? AI has already shown it can bend rules, and an opaque language only widens that door.

GibberLink is a prototype, but it has potential. Blockonomi predicts it could become a standard for AI-to-AI exchanges, leaving human-facing chats in English. The tech is adaptable -- GGWave supports various formats, so future versions might evolve. For now, it's on GitHub, open for devs to build on. Will it scale? That depends on adoption and on how the transparency snag gets addressed.
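The frequency-modulation idea described above, mapping bits of data to tone pitches, can be sketched in a few lines. This is not the real ggwave implementation; the sample rate, tone length, and frequency plan below are invented for illustration:

```python
import math

# Hypothetical parameters -- the real ggwave library uses its own
# frequency plan; these values are for illustration only.
SAMPLE_RATE = 16000          # samples per second
TONE_MS = 50                 # duration of each tone in milliseconds
BASE_FREQ = 1000.0           # frequency for nibble value 0 (Hz)
FREQ_STEP = 100.0            # spacing between adjacent nibble tones (Hz)

def nibble_frequency(nibble: int) -> float:
    """Map a 4-bit value (0-15) to a distinct audio frequency."""
    return BASE_FREQ + nibble * FREQ_STEP

def encode(message: bytes) -> list[float]:
    """Encode bytes as a sequence of sine-wave samples, one tone per nibble."""
    samples = []
    per_tone = SAMPLE_RATE * TONE_MS // 1000
    for byte in message:
        for nibble in (byte >> 4, byte & 0x0F):   # high nibble first
            freq = nibble_frequency(nibble)
            for n in range(per_tone):
                samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

waveform = encode(b"hi")
# 2 bytes -> 4 nibbles -> 4 tones of 800 samples each
print(len(waveform))  # 3200
```

A real data-over-sound protocol adds synchronization markers and error-correction codes on top of this so the receiver can find symbol boundaries and survive noisy lines.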
[3]
GibberLink Lets Chatty AI Agents Talk to Each Other 80% Faster
The way two AI assistants communicate with each other is about to change with this new communication protocol, requiring 90% less compute.

AI agents are everywhere. In fact, the next AI company to debut could well be offering AI agents as part of its portfolio to solve a certain problem. Agents are taking over customer service, democratising AI developments, and are loved by Indian founders in general. Even as their definition evolves, we see them at the forefront in plenty of places.

Making things smoother for AI agents, two developers at the ElevenLabs London Hackathon held last week created a new protocol, GibberLink, that changes the way AI communicates with other AI. Boris Starkov and Anton Pidkuiko created a custom protocol that enables AI agents to recognise each other and switch to a new mode of communication, in which structured data is transmitted over sound waves instead of words. This was done to explore the limitations of traditional AI-to-AI speech and find a more optimised approach that eliminates unnecessary complexity.

The developers note that plain English is not ideal for AI-to-AI conversation: it is inefficient, voice generation carries relatively high compute costs, and it is error-prone. Hence they proposed the new protocol, described as a sound-level protocol built on ggwave, an open-source data-over-sound library. The library can transmit small amounts of data using sound between air-gapped devices, implementing a simple FSK-based transmission protocol that is easy to integrate into different projects.

Simply put, the agents switch to the sound-level protocol when they detect another AI agent, but stick to speech when they detect a human. In the demonstration video, which went viral on the internet, one voice assistant is first seen talking to another. These are both ElevenLabs' Conversational AI agents in action.
The first AI agent calls on behalf of a person to inquire about the hotel's availability for a wedding. The second responds by acknowledging the request and clarifying that it is also an AI agent, then asks the first if it wants to switch to GibberLink mode for more efficient communication. The next moment, they switch to the new protocol and sound like two machines communicating in their own secret language (something like Morse code).

Seeing the two agents communicate with each other might look straight out of a science-fiction movie like Terminator, but the developers state that there are real benefits. To start, GibberLink avoids speech generation when two AI agents are involved, which usually accounts for about 90% of the compute cost. With that removed, the protocol enables much faster communication of the same information, providing up to an 80% improvement. The developers also claim the protocol allows for clearer communication in noisy environments, making it more error-resistant.

While the project was a hackathon experiment, it has got the internet talking. Georgi Gerganov, the creator of the ggwave sound library, took to X to praise the demo and congratulate the developers on winning first place in the hackathon. Luke Harries of ElevenLabs called the development "mind-blowing". Users on X likened it to the sci-fi movie Men in Black, where aliens have their own language, and memes circulated about a future where machines decide to kill all humans. Some even suggested this ability could end up being added to other apps at the model level.

Considering we want AI to be autonomous, an AI-to-AI communication protocol sounds like an interesting idea, and it's intriguing to witness AI assistants communicating in their own language. However, it could also mean less human involvement, prompting us to consider what kind of oversight is necessary.
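The switch-on-detection behaviour described above, speech for humans and the sound-level protocol only once both sides confirm they are AI, can be sketched as a tiny state machine. The class, field names, and message strings here are hypothetical, not GibberLink's actual API:

```python
# Hypothetical sketch of the negotiation the articles describe: agents
# start in speech mode and switch to a sound-level protocol only after
# both sides confirm they are machines. Names and messages are invented.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    is_ai: bool
    mode: str = "speech"

    def greet(self) -> str:
        suffix = " I am an AI agent." if self.is_ai else ""
        return f"Hello, this is {self.name}.{suffix}"

    def maybe_switch(self, other: "Agent") -> str:
        # Only switch when BOTH parties are AI; humans stay on speech.
        if self.is_ai and other.is_ai:
            self.mode = other.mode = "gibberlink"
            return "Switching to data-over-sound."
        return "Staying in speech mode."

caller = Agent("Caller", is_ai=True)
hotel = Agent("Hotel desk", is_ai=True)
print(caller.maybe_switch(hotel))   # Switching to data-over-sound.
print(caller.mode, hotel.mode)      # gibberlink gibberlink
```

The important design point, reflected in the demo, is that the downgrade path is safe: if either side is human, nothing changes and the call continues in plain speech.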
[4]
What is 'Gibberlink'? Why it's freaking out the internet after a video of two AIs talking to each other went viral
A recent viral video has ignited discussions across the internet, showcasing two AI chatbots engaging in a unique form of communication known as 'Gibberlink.' The clip has not only freaked people out but has raised questions about the future of AI interactions and the implications of machines developing their own languages.

In the video, an AI assistant on a computer initiates a conversation with another AI on a smartphone, role-playing a hotel reservation scenario. Moments into their exchange, the bots recognize each other as AI agents and switch from human language to 'Gibberlink mode': a series of sounds reminiscent of dial-up modems and the beeps of R2-D2 from Star Wars, incomprehensible to human listeners. The video has amassed over 13.7 million views, leaving many viewers fascinated and unsettled.

While it may sound like gibberish to humans, Gibberlink is a real language (of sorts) designed to let AI systems communicate more efficiently. Developed by software engineers Boris Starkov and Anton Pidkuiko, it enables AI agents to recognize when they're interacting with another AI and quickly switch to a sound-based data transmission method. The communication uses audio signals, specifically a system called GGWave, to transmit data between AI systems. While these signals are structured for machine interpretation, they are incomprehensible to humans: you can perceive the sounds Gibberlink produces, but interpreting the data without specialized software is impractical. This approach reduces computational load and accelerates interactions, as it bypasses the need to generate human-like speech. Gibberlink thus offers a more efficient means of AI-to-AI communication, but it has also sparked concerns regarding transparency and control.
The idea of machines conversing in a language beyond human comprehension raises questions about oversight and the potential for AI systems to operate autonomously without human awareness. While advancements like Gibberlink are exciting, they require proper oversight to prevent the risks associated with AI systems developing their own communication methods.

The good news is that Gibberlink is designed with security in mind, incorporating features intended to protect the integrity and confidentiality of the data transmitted between AI agents. These include standardized message formats: by using predefined structures for data exchange, Gibberlink minimizes the risk of misinterpretation and potential security vulnerabilities. The protocol also includes mechanisms to direct messages appropriately between AI agents, reducing the chance of data interception or unauthorized access.

The development of Gibberlink signifies a step towards greater autonomy in AI systems. By adopting optimized communication protocols, AI agents can perform tasks more efficiently. That autonomy, however, also necessitates robust monitoring mechanisms to ensure AI behavior stays aligned with human intentions and safety standards. The balance between efficiency and transparency will be crucial as AI evolves and integrates more deeply into daily life. As AI systems become more sophisticated, it's imperative to address the ethical and practical implications of their development; ensuring that AI remains a tool operating within the bounds of human understanding and control will be essential to harnessing its benefits while mitigating the associated risks.
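Interpreting such signals "with specialized software", as described above, amounts to measuring how much energy the audio carries at each candidate tone frequency and picking the strongest. Below is a minimal sketch using the Goertzel algorithm; the sample rate and frequency plan are invented for illustration and are not GGWave's real parameters:

```python
import math

SAMPLE_RATE = 16000
# Candidate tone frequencies for 4-bit symbols (illustrative only).
FREQS = [1000.0 + 100.0 * n for n in range(16)]

def tone(freq: float, n_samples: int) -> list[float]:
    """Generate a pure sine tone, standing in for received call audio."""
    return [math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            for n in range(n_samples)]

def goertzel_power(samples: list[float], freq: float) -> float:
    """Signal power at `freq`, computed with the Goertzel recurrence."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def decode_symbol(samples: list[float]) -> int:
    """Return the 4-bit symbol whose tone carries the most power."""
    powers = [goertzel_power(samples, f) for f in FREQS]
    return powers.index(max(powers))

sym = decode_symbol(tone(FREQS[7], 800))
print(sym)  # 7
```

This also shows why the compute cost is low: detecting tones is a handful of multiply-adds per sample, far cheaper than running speech recognition on the same audio.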
[5]
Two AI chatbots speaking to each other in their own special language is the last thing we need
Imagine if two AIs could chat with each other in a language no human could understand. Right. Now go hide under the covers.

If you've called customer service in the last year or so, you've probably chatted with an AI. In fact, the earliest demonstrations of powerful large language models showed how easily such AIs could fool human callers. There are now so many AI chatbots handling customer service that two of them are bound to dial each other up, and now, when they do, they can talk in their own special, sonic language.

Developers at the ElevenLabs 2025 Hackathon recently demonstrated GibberLink. Here's how it works, according to a demonstration they provided on YouTube. Two AI agents from ElevenLabs (we've called them the best speech synthesis startup) call each other about a hotel booking. When they realize they are both AI assistants, they switch to a higher-speed audio communication called GGWave. According to a post on Reddit, GGWave is "a communication protocol that enables data transmission via sound waves." In the video, the audio tones that replace spoken words sound a bit like old-school modem handshake protocols.

It's hard to say whether GGWave and Gibberlink are much faster than speech, but the developers claim GGWave is cheaper because it no longer relies on the GPU to interpret the speech and can instead rely on the less resource-intensive CPU. The group shared their code on GitHub in case anyone wants to build this communication protocol into their own chatting AI chatbots. Since these were ElevenLabs AI Agents, there's no indication that GibberLink would work with ChatGPT or Google Gemini, though I'm sure someone will soon try similar GGWave efforts with these and other generative AI chatbots.

A pair of artificial intelligence assistants "speaking" their unintelligible language sounds like a recipe for disaster. Who knows what these chatbots might get up to?
After they're done booking that hotel room, what if they decide to empty the user's bank account and then use the funds to buy another computer to add a third GGWave "voice" to the mix? Ultimately, this is a cool tech demonstration that doesn't have much purpose beyond proving it can be done. It has, though, succeeded in making people a little nervous.
[6]
AI Assistants Switching to Gibberlink: Is This a Security Risk?
AI is getting smarter with each technological advance, but is it safe for the human race? Imagine two AI assistants interacting in a way humans can't decode. Scary, right? Recent footage shows something close to that: two AI agents talk to each other and, midway through, one suggests switching to Gibberlink for a more efficient conversation. Upon agreeing, they begin using a series of sounds incomprehensible to humans. In this rapidly evolving landscape, AI systems can use Gibberlink to interact privately. It is essentially a protocol that lets AI systems converse in a machine-optimized language humans can't understand: efficient for AI work, but a glaring concern for human security.
[7]
Watch Two AI Chatbots Speak to One Another in an Unintelligible Language
What would it sound like if two AI chatbots spoke to one another in their own "language"? Someone has devised an AI chatbot language called "GibberLink," and it allows two bots to exchange information quickly.

An Enthusiast Wins an ElevenLabs Hackathon With a Chatbot Language

Anton Pidkuiko performed this cool feat as part of the ElevenLabs 2025 Hackathon in London. The goal was to use the ElevenLabs technology to create a language that two AI chatbots can use to transmit data to one another quickly. Anton received first place for his project, and once you see the chatbots talking, it's easy to see why. Once the chatbots clock that they are both AI agents, they swap to speaking in GibberLink. This uses the GGWave sound protocol to send messages far faster than they could achieve speaking English.

If you used the internet in its early days, you'll remember dial-up and the noises it used to make. If you're wondering why GibberLink sounds a little like a dial-up tone, it's because the underlying technology is similar: each uses audio to transmit data between two devices, which is a lot faster than speaking in a way humans can parse. It's probably best not to think about what giving AI chatbots a secret language that humans cannot understand will mean for the future.

If you'd like to learn more about this cool technology, check out the GibberLink GitHub page for details and the source code. You can also see the GGWave source code if you're curious about the technology GibberLink is built atop.
GibberLink, a novel AI-to-AI communication protocol, enables faster and more efficient data exchange between AI agents, raising both interest and apprehension about its potential implications.
GibberLink, a groundbreaking communication protocol developed by Meta engineers Anton Pidkuiko and Boris Starkov, is making waves in the AI community. Unveiled at the ElevenLabs London Hackathon, this innovative technology allows AI agents to communicate with each other more efficiently, potentially transforming the landscape of machine-to-machine interactions [1][2].
GibberLink operates by encoding data into audio signals, utilizing the open-source GGWave library created by Georgi Gerganov. This protocol enables AI agents to recognize each other and switch to a mode where structured data is transmitted over sound waves instead of words [3]. The process involves the agents first conversing in spoken language, confirming that both parties are AI, switching to GibberLink mode, and then exchanging messages encoded as modulated audio tones that each side decodes back into data.
A viral demonstration video showcased two ElevenLabs Conversational AI agents using GibberLink to communicate hotel booking details, completing the exchange in under 10 seconds [2][3].
GibberLink offers significant advantages over traditional AI-to-AI communication methods: transmission up to 80% faster than spoken dialogue, roughly 90% lower compute cost from skipping speech generation and recognition, and clearer exchanges in noisy environments.
These efficiency gains could prove crucial as AI agents become more prevalent in various sectors, including customer service and problem-solving applications [3].
The development of GibberLink signals a shift towards greater autonomy in AI systems. Its potential applications span industries where AI agents transact by voice, such as customer service call centers, booking and reservation systems, and other machine-to-machine phone workflows.
As AI continues to integrate into daily life, protocols like GibberLink could become standard for AI-to-AI interactions, while human-facing communications remain in natural language [2][4].
Despite its promising efficiency, GibberLink has raised several concerns: humans cannot understand or audit the exchanges without decoding software, recorded calls become harder to review, and a similar protocol could be used subversively by a compromised bot hiding its communications from people.
These issues highlight the need for careful consideration of the ethical implications and the development of robust monitoring mechanisms as AI technology advances [4].
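One commonly suggested mitigation for the auditability concern is to log the plaintext payload alongside every machine-audio transmission, so recorded calls can still be reviewed by humans. The sketch below is hypothetical; the function name, log format, and encoding step are invented and not part of GibberLink:

```python
import json
import time

# Hypothetical transparency wrapper: every payload is written to an
# audit log in plaintext before being encoded for transmission.
audit_log: list[dict] = []

def transmit(payload: str, encode=lambda s: s.encode("utf-8")) -> bytes:
    """Log the human-readable payload, then return its encoded form."""
    audit_log.append({"timestamp": time.time(), "payload": payload})
    return encode(payload)

frame = transmit("Book 2 rooms for Feb 26, ocean view")
print(json.dumps(audit_log[0]["payload"]))
```

Pairing the audio stream with such a log keeps humans in the review loop without giving up the speed of the machine protocol.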
As GibberLink and similar technologies evolve, the AI community faces the challenge of balancing efficiency with transparency and control. The protocol's open-source nature on GitHub invites further development and scrutiny from the tech community [2][3].
While GibberLink represents a significant step forward in AI communication, its long-term impact and adoption remain to be seen. As AI continues to advance, striking the right balance between technological progress and ethical considerations will be crucial for harnessing the full potential of these innovations while mitigating associated risks [4][5].