Curated by THEOUTPOST
On Tue, 24 Sept, 12:04 AM UTC
12 Sources
[1]
ChatGPT's New Voice Assistant Is Here to Creep You Out
OpenAI's much-anticipated ChatGPT voice assistant is rolling out this week to all paying subscribers, and like a lot of AI features, it's a little creepy in action. Advanced Voice Mode (AVM) began making its way to users who pay for ChatGPT Plus this week, according to OpenAI. The company tweeted a video of the feature in action as it helps someone craft an apology to their grandmother for being late. The user switches gears and tells the voice assistant to make the apology in Mandarin, which it does on the fly. The company says AVM can offer the apology in more than 50 languages.

OpenAI first revealed AVM back in May, and it caused quite a controversy because one of the voices sounded very similar to Scarlett Johansson. Johansson threatened legal action, saying she had declined an offer from CEO Sam Altman last year to voice the feature and had warned the company not to use her voice. The voice used in the demo still has a hint of Johansson to it.

AVM officially launched in July, but only to a select number of ChatGPT Plus subscribers. Tuesday's announcement is the start of a wider launch to Plus subscribers and Team users this week, while Enterprise and Edu users will see the feature roll out to them next week. You'll know you have AVM when a notification appears in the app.

There are five new voices for AVM: Arbor, Maple, Sol, Spruce, and Vale, bringing the total number of voices to nine. Along with the new voices, AVM can be customized: Custom Instructions in the Settings menu let you choose how you want the model to speak. You can have it speak clearly and slowly, address you by a certain name, or even act as an interviewer if you want to practice for a job interview. That said, expect some people to customize their AVMs in ways you wouldn't want to think about.

OpenAI also says AVM brings a variety of improvements to conversational speed, smoothness, and accents, and that it can adapt its tone to match the conversation. Those who want to have a real chat with ChatGPT will have to pay for the Plus subscription, which costs $20 a month.
[2]
OpenAI's voice mode rolling out to all ChatGPT subscribers - here's who gets it first
If you have been considering a ChatGPT Plus membership, now may be the time. OpenAI's Advanced Voice Mode, one of the most highly anticipated features of OpenAI's Spring Launch event, is finally out of alpha and available to all ChatGPT Plus and Team users.

Also: The best AI chatbots of 2024: ChatGPT, Copilot, and worthy alternatives

On Tuesday, OpenAI announced that it has started to roll out its Advanced Voice Mode to ChatGPT Plus and Team users, offering them a smarter voice assistant that can be interrupted and can respond to their emotions. The rollout also features five new voices -- Arbor, Maple, Sol, Spruce, and Vale -- available in both Standard and Advanced Voice Mode. OpenAI says the rollout to ChatGPT Plus and Team users will be gradual, with the rollout to Enterprise and Edu tiers coming next week. Users will know they have access when a pop-up message appears next to the Voice Mode option within the ChatGPT interface.

Since the release of the alpha in July, OpenAI has applied learnings from it to improve Advanced Voice Mode, giving it better accents in foreign languages and improved conversation speed and smoothness. Advanced Voice Mode also has a different look, now represented by an animated blue sphere. To make the experience even more tailored to the user, Advanced Voice Mode can now use Custom Instructions and Memory, which let it consider specific user-shared or designated criteria when producing a response.

As with the alpha, users will not be able to access Voice Mode's multimodal capabilities, including assisting with content on users' screens and using the phone camera as context for a response, as seen in the original demo video. OpenAI tested the voice capabilities with more than 100 external red teamers across 45 languages to ensure the model's safety. In August, the startup published its GPT-4o System Card, a thorough report delineating the LLM's safety based on risk evaluations according to OpenAI's Preparedness Framework, external red teaming, and more, including Advanced Voice Mode.

Also: Gemini Live is finally hitting Android phones - how to access it for free

You can become a ChatGPT Plus subscriber for $20 per month. Other membership perks include advanced data analysis features, unlimited image generation, five times more messages for GPT-4o, and the ability to create custom GPTs. One week after OpenAI unveiled this feature in May, Google unveiled a similar feature called Gemini Live, a conversational voice assistant backed by LLMs to improve the understanding and flow of a conversation. Earlier this month, Google made Gemini Live available for free to all Android users, so if you have an Android phone and want to experience this type of assistant, you may not need to shell out for a ChatGPT Plus subscription.
[3]
OpenAI finally brings humanlike ChatGPT Advanced Voice Mode to U.S. Plus, Team users
Four months after it was initially shown off to the public, OpenAI is finally bringing its new humanlike conversational voice interface for ChatGPT -- "ChatGPT Advanced Voice Mode" -- to users beyond its initial small testing group and waitlist. All paying subscribers to OpenAI's ChatGPT Plus and Team plans will get access to the new mode, though access is rolling out gradually over the next several days, according to OpenAI. It will be available in the U.S. to start. Next week, the company plans to make ChatGPT Advanced Voice Mode available to subscribers of its Edu and Enterprise plans.

In addition, OpenAI is adding the ability to store "custom instructions" for the voice assistant and "memory" of the behaviors the user wants it to exhibit, similar to features rolled out earlier this year for the text version of ChatGPT. It is also shipping five new, different-styled voices today -- Arbor, Maple, Sol, Spruce, and Vale -- joining the four previously available, Breeze, Juniper, Cove, and Ember, which users could talk to using ChatGPT's older, less advanced voice mode. OpenAI did not provide samples of the new voices.

This means ChatGPT users -- individuals on Plus and small enterprise teams on Team -- can use the chatbot by speaking to it instead of typing a prompt. Users will know they've entered Advanced Voice Mode via a popup when they access voice mode in the app. "Since the alpha, we've used learnings to improve accents in ChatGPT's most popular foreign languages, as well as overall conversational speed and smoothness," the company said. "You'll also notice a new design for Advanced Voice Mode with an animated blue sphere."

These updates are only available on the GPT-4o model, not the recently released preview model, o1. ChatGPT users can also utilize custom instructions and memory to ensure voice mode is personalized and responds based on their preferences across all conversations.

AI voice chat race

Ever since the rise of AI voice assistants like Apple's Siri and Amazon's Alexa, developers have wanted to make the generative AI chat experience more humanlike. ChatGPT had voices built into it even before the launch of voice mode, with its Read Aloud function. The idea of Advanced Voice Mode, however, is to give users a more humanlike conversation experience, a concept other AI developers want to emulate as well. Hume AI, a startup founded by former Google DeepMind researcher Alan Cowen, released the second version of its Empathic Voice Interface, a humanlike voice assistant that senses emotion based on the pattern of someone's voice and can be used by developers through a proprietary API. OpenAI says that by making AI voices widely available across its platforms, it is bringing the technology to many more people than those other firms.

Comes following delays and controversy

However, the idea of AI voices conversing in real time and responding with the appropriate emotion hasn't always been received well. OpenAI's foray into adding voices to ChatGPT was controversial at the onset. At its May event announcing GPT-4o and the voice mode, people noticed similarities between one of the voices, Sky, and that of the actress Scarlett Johansson.
It didn't help that OpenAI CEO Sam Altman posted the word "her" on social media, a reference to the movie Her, in which Johansson voiced an AI assistant. The controversy sparked concerns about AI developers mimicking the voices of well-known individuals. The company denied that it referenced Johansson and insisted that it did not intend to hire actors whose voices sound similar to others'. The company said users are limited to the nine voices provided by OpenAI. It also said that it evaluated the mode's safety before release. "We tested the model's voice capabilities with external red teamers, who collectively speak a total of 45 different languages, and represent 29 different geographies," the company said in an announcement to reporters.

However, OpenAI delayed the launch of ChatGPT Advanced Voice Mode from its initially planned rollout date of late June to "late July or early August," and even then only to a group of OpenAI-selected initial users, such as University of Pennsylvania Wharton School professor Ethan Mollick, citing the need to continue safety testing, or "red teaming," the voice mode to avoid its use in potential fraud and wrongdoing. Clearly, the company thinks it has done enough to release the mode more broadly now -- a move in keeping with OpenAI's generally more cautious approach of late, working hand in hand with the U.S. and U.K. governments and allowing them to preview new models such as its o1 series prior to launch.
[4]
OpenAI Rolls Out New Voice Assistant to All Paid ChatGPT Users
OpenAI is releasing a much-anticipated new voice assistant to all paid users of its chatbot ChatGPT, four months after the artificial intelligence company first unveiled the feature at a product launch event. The San Francisco-based startup said Tuesday it has started rolling out the option, known as Advanced Voice Mode, to ChatGPT Plus subscribers and users of its ChatGPT Team service for businesses. The company said Enterprise and Edu paid users will begin getting access to the feature next week.

OpenAI first teased the voice product in May, showing how it could quickly respond with a spoken voice to written and visual prompts from users. But the next month, OpenAI delayed launching the option to work through potential safety issues. In July, OpenAI rolled out the feature to a limited number of its ChatGPT Plus customers. After the delay, OpenAI said the product would not be able to impersonate other people's voices. The company also said that it had added new filters to ensure the software can spot and refuse some requests to generate music or other forms of copyrighted audio.
[5]
OpenAI is about to roll out ChatGPT Advanced Voice for Plus users
We get it -- some of you, just like our team here, haven't had the chance to try ChatGPT's Advanced Voice feature yet; but a recent leak confirms that OpenAI is about to roll it out for a select group of Plus users. So, soon you'll be able to compare those early demo videos with your own hands-on experience!

OpenAI's Advanced Voice mode, first demoed in May, has been stirring up excitement. This feature lets you chat with ChatGPT on your phone in a natural, back-and-forth conversation, even giving you the power to cut it off if it starts to ramble. It also handles complex questions with ease, offering in-depth responses. A recent leak, reportedly from the ChatGPT team, suggests that the feature is being rolled out in a limited alpha to select users. According to the email, access to this alpha phase, starting September 24, 2024, will be based on various factors, including participation invites and other testing criteria. In simpler terms: not everyone will get it just yet:

"Hi there, Thank you for reaching out and for your interest in the Advanced Voice mode! It's great to hear about your enthusiasm for our new features. As of now, access to Advanced Voice mode is being rolled out in a limited alpha to a select group of users. While being a long-time Plus user and having been selected for SearchGPT are both indicators of your active engagement with our platform, access to the Advanced Voice mode alpha on September 24, 2024, will depend on a variety of factors including but not limited to participation invitations and the specific criteria set for the alpha testing phase. Unfortunately, I don't have the ability to manually add users to the alpha testing list or provide specific insights into individual account access timelines. However, Plus users like yourself are among the first to receive access to new features, and we are planning for all Plus users to have access in the fall. Keep an eye on your email and app notifications, as any invitations or updates regarding access will be communicated through those channels. We truly appreciate your support and interest in being part of the early users for Advanced Voice mode. Your enthusiasm for our products helps us improve and expand our offerings. Best, OpenAI Team"

While OpenAI has promised that all Plus users will have access by the end of fall, this alpha rollout is a promising step toward the full release. Plus users, who pay $20 a month (or the equivalent in other regions), already get access to various LLMs, including the recently launched o1-preview, which has impressed many with its improved math-solving and reasoning skills.

OpenAI's voice feature has been in the spotlight for various reasons, including concerns about copyright and the pace of the rollout. Earlier this year, the company faced backlash over its "Sky" voice, which many users felt closely resembled Scarlett Johansson's. After the feedback and Johansson's legal threat, OpenAI decided to pull the voice, clarifying that Sky was voiced by a different actress.

Now, Plus users are eagerly pressing OpenAI to accelerate the full rollout of Advanced Voice. Meanwhile, Apple Intelligence has yet to make its debut on iPhones, keeping iOS 18 users waiting for the anticipated AI-driven features. On the Android side, Google's Gemini AI is already making waves, with early access being rolled out, giving users a taste of advanced voice and assistant capabilities before much-anticipated AI updates arrive.
[6]
OpenAI rolls out Advanced Voice Mode with more voices and a new look
OpenAI announced on Tuesday that it is rolling out Advanced Voice Mode (AVM) to an expanded set of ChatGPT's paying customers. The audio feature, which makes ChatGPT more natural to speak with, will initially roll out to customers in ChatGPT's Plus and Teams tiers. Enterprise and Edu customers will start receiving access next week.

As part of the rollout, AVM is getting a revamped design. The feature is now represented by a blue animated sphere, instead of the animated black dots that OpenAI presented during its showcase of the technology in May. Users will see a popup in the ChatGPT app, next to the voice icon, when AVM has been made available to them.

ChatGPT is also getting five new voices that users can try out: Arbor, Maple, Sol, Spruce, and Vale. This brings ChatGPT's total number of voices to nine (almost as many as Google's Gemini Live), alongside Breeze, Juniper, Cove, and Ember. You might notice all of these names are inspired by nature, which could be because the whole point of AVM is to make using ChatGPT feel more natural.

One voice missing from this lineup is Sky, the voice OpenAI showcased during its Spring Update, which led to a legal threat from Scarlett Johansson. The actress, who voiced an AI system in the film Her, claimed that Sky sounded a little too similar to her own voice. OpenAI promptly took Sky down, saying the voice was never intended to resemble Johansson's, despite several staff members making references to the movie in tweets at the time.

Another feature missing from this rollout is the video and screen-sharing capability that OpenAI debuted during its Spring Update four months ago. That feature is supposed to let GPT-4o process visual and audible information simultaneously. During the demo, an OpenAI staff member showed how you could ask ChatGPT real-time questions about math on a piece of paper in front of you, or code on your computer screen. At this time, OpenAI is not offering a timeline for when it will launch these multimodal capabilities.

That said, OpenAI says it has made some improvements since releasing its limited alpha test of AVM. ChatGPT's voice feature is reportedly better at understanding accents now, and the company says its conversations are smoother and faster as well. During our tests with AVM, we found that glitches were not uncommon, but the company claims that's now improved.

OpenAI is also expanding some of ChatGPT's customization features to AVM: Custom Instructions, which lets users personalize how ChatGPT responds to them, and Memory, which lets ChatGPT remember conversations to reference later on. An OpenAI spokesperson says AVM is not yet available in several regions, including the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein.
[7]
ChatGPT rolling out Advanced Voice Mode now -- here's what you need to know
The update ChatGPT Plus subscribers have been waiting for is finally here. OpenAI announced today that Advanced Voice Mode is available to ChatGPT Plus and Team users. This new feature promises conversations with a more natural and humanlike feel, enhancing user interactions. We knew this was coming, and the advancement marks a significant step in improving voice interactions for conversational AI.

Advanced Voice Mode utilizes the new GPT-4o model, which combines text, vision, and audio processing for faster, more efficient responses. Unlike its predecessors, users can now experience real-time, emotionally responsive conversations with dynamic speech patterns, and the AI can even handle interruptions with ease. OpenAI continues to pave the way for smoother, more fluid interaction as it leads voice-based AI technology, though it has company from Gemini Live.

ChatGPT Plus users can expect enhanced personalization features, including Custom Instructions and Memory, to make each interaction more tailored to the user. These features ensure that the AI adapts to individual conversational preferences, making each session more intuitive and natural. As part of this rollout, OpenAI has introduced five new voices, available in both Standard and Advanced Voice Mode. These new voice options give users more control over how they interact with the AI.

The update is currently exclusive to ChatGPT Plus and Team users in the U.S., with access for Enterprise and Edu subscribers beginning next week. Those in the EU, UK, Switzerland, Iceland and Norway will have to wait a bit longer until the features are available within their region.

As part of ongoing improvements, OpenAI has enhanced accent recognition in popular foreign languages and improved conversational smoothness and speed. A refreshed design featuring a new animated blue sphere further enhances the Advanced Voice Mode experience. Excluded from this launch are video and screen-sharing features, although OpenAI has hinted at plans to introduce them in future updates.
[8]
ChatGPT's Advanced Voice Mode is set to roll out tomorrow -- here's what we know
An exciting leak from OpenAI via X suggests a major new ChatGPT feature is coming as early as tomorrow. From what we know, the company will unveil Advanced Voice Mode to all Plus subscribers, providing a more interactive, conversational experience in real time. As a ChatGPT Plus user myself, I am practically giddy over the move to a more humanlike AI interaction.

The leak suggests that Advanced Voice Mode will initially apply to ChatGPT Plus subscribers -- those who pay $20 per month for enhanced features such as faster response times and priority access to new capabilities. Once launched, the feature will be accessible through the ChatGPT app, where users can opt in to activate voice input and choose from a variety of voice options.

If you've used ChatGPT Voice, you already know that the conversation feels fairly realistic. Honestly, the voices aren't bad for a first pass. But I have noticed that they glitch a bit, and like ChatGPT's text mode, the conversation needs some prompting to get the correct response. I have high hopes for the possibilities of Advanced Voice Mode. OpenAI has reportedly put substantial effort into fine-tuning the AI's voice creation, including varying tones and inflections, to ensure conversations with ChatGPT are more personal and immersive. I'm happy to know that Advanced Voice Mode will integrate directly into the ChatGPT interface for a seamless transition between text and voice.

Live, real-time conversations are not new; we have seen similar advancements from Gemini Live and are anticipating more from Apple Intelligence's Siri. As AI-powered assistants continue to deliver more sophisticated conversations, the rivalry between competitors means users can anticipate that humanlike interactions will continue to be fine-tuned. This development comes at a time when the race to create the most advanced conversational AI is intensifying. With giants like Amazon, Apple and Google all integrating AI into their virtual assistants, OpenAI's Advanced Voice Mode could set ChatGPT apart as a more versatile tool. The company is leveraging its AI expertise to give ChatGPT users deeper engagement.

I foresee this feature beginning a shift in how we interact with digital assistants. The hands-free chat capabilities are ideal for multitasking and for more natural accessibility. Advanced Voice Mode will be particularly useful for tasks that call for a more conversational approach, such as setting reminders, answering questions that require complex answers, or providing step-by-step instructions for everything from home repairs to recipes.

As of now, the new voice feature will only be available to ChatGPT Plus subscribers, giving them exclusive access to cutting-edge AI capabilities. I have no doubt that if the rollout is successful and user feedback is positive, OpenAI may eventually extend the feature to the free tier or further integrate new voice models into business and enterprise solutions. For those already paying for Plus, this addition enhances the value of the subscription by adding another layer of convenience and interactivity. Just one more sleep until ChatGPT Plus users get to try it!
[9]
Top Tech News: OpenAI Unveils New ChatGPT Feature, Advanced Voice Mode, Crypto Scammers Hijack OpenAI's Press Account on X for Malicious Gains
According to information leaked on X, OpenAI plans to introduce Advanced Voice Mode to all its Plus subscribers, offering a more engaging, real-time conversation experience. The leak indicates that Advanced Voice Mode will start with ChatGPT Plus members, those who pay US$20 a month for enhanced features like quicker replies and early access to new functionalities. After launch, users will be able to access the feature through the ChatGPT app, where they can enable voice input and select from various voice options. This announcement comes as the competition to develop the most sophisticated AI chatbot heats up. With major companies like Amazon, Apple, and Google incorporating AI into their voice assistants, OpenAI's Advanced Voice Mode could make ChatGPT stand out as a more adaptable tool. The company is using its AI expertise to enhance the interaction experience for ChatGPT users.
[10]
ChatGPT's Advanced Voice Mode might roll out to select Plus users today
Key Takeaways: ChatGPT's Advanced Voice Mode feature is allegedly being rolled out to select Plus users today. A wider release is likely to happen this fall; however, OpenAI has yet to confirm a date. Google has beaten OpenAI to the punch, at least this time around, by launching Gemini Live ahead of its competitor.

Ever since ChatGPT launched in late 2022, it has constantly added new features that have made it somewhat of a viral sensation among users. One of these features is Advanced Voice Mode, which the company debuted in May of this year. The capability essentially allows ChatGPT to hold more natural voice-based conversations and respond to the user with some degree of "emotion." Naturally, most users are eager to try it out.

OpenAI will allegedly open up the Advanced Voice Mode feature to more users today

Now, a recent leak on X suggests that a select group of ChatGPT Plus users might be getting access to this feature as early as today (September 24). The leak in question includes an email from OpenAI that mentions "access to the Advanced Voice mode alpha on September 24, 2024, will depend on a variety of factors including but not limited to participation invitations and the specific criteria set for the alpha testing phase." The email also notes that Plus users are "among the first to receive access to new features." So, it appears that while some Plus users might be added to the alpha testing group today, others will have to wait until later in the fall for access.

Although OpenAI has yet to confirm a date for the full release of Advanced Voice Mode, the addition of more users to the testing phase indicates a wider rollout might happen sooner rather than later. The timeline isn't surprising, given that OpenAI's competitor Google has already launched Gemini Live, the version of its chatbot with voice capabilities.

While being a ChatGPT Plus member comes with several perks, we should note that the subscription itself isn't exactly cheap at $20 per month. However, it does give you access to a variety of LLMs and even early features like Advanced Voice Mode. It remains to be seen whether OpenAI will keep Advanced Voice Mode exclusive to Plus users or follow Google's approach with Gemini Live and make it available to all users for free. If you're enjoying testing these new AI features, be sure to check out the best AI applications that you can run on your PC.
[11]
OpenAI Set to Launch Advanced Voice Mode on ChatGPT Soon
OpenAI is set to launch Advanced Voice Mode on ChatGPT this Tuesday, September 24, 2024, according to a screenshot posted by a user on X. "As of now, access to Advanced Voice mode is being rolled out in a limited alpha to a select group of users. While being a long-time Plus user and having been selected for SearchGPT are both indicators of your active engagement with our platform, access to the Advanced Voice mode alpha on September 24, 2024, will depend on a variety of factors including but not limited to participation invitations and the specific criteria set for the alpha testing phase," read the message in the screenshot.

OpenAI released GPT-4o at its Spring Update event earlier this year, winning hearts with its 'omni' capabilities across text, vision, and audio. OpenAI's demos, which included a real-time translator, a coding assistant, an AI tutor, a friendly companion, a poet, and a singer, soon became the talk of the town. However, its Advanced Voice Mode wasn't released. When OpenAI recently released o1, a user asked whether the company would be launching voice features soon. "How about a couple of weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?" replied Sam Altman, with a tinge of sarcasm.

Meanwhile, Kyutai, a French non-profit AI research laboratory, launched Moshi, a real-time native multimodal foundational AI model capable of conversing with humans in real time, much like what OpenAI's advanced model was intended to do. Hume AI recently introduced EVI 2, a new foundational voice-to-voice AI model that promises to enhance human-like interactions. Available in beta, EVI 2 can engage in rapid, fluent conversations with users, interpreting tone and adapting its responses accordingly. The model supports a variety of personalities, accents, and speaking styles and includes multilingual capabilities. Amazon Alexa, for its part, is partnering with Anthropic to improve its conversational abilities, making interactions more natural and human-like. Earlier this year, Google launched Astra, a 'universal AI agent' built on the Gemini family of AI models. Astra features multimodal processing, enabling it to understand and respond to text, audio, video, and visual inputs simultaneously.
[12]
OpenAI released its advanced voice mode to more people. Here's how to get it.
OpenAI is broadening access to Advanced Voice Mode, a ChatGPT feature that lets you speak more naturally with the AI model. You can interrupt its responses mid-sentence, and it can sense and interpret your emotions based on your tone of voice and adjust its responses accordingly. These features were teased back in May, when OpenAI unveiled GPT-4o, but they were not released until July -- and then only to an invite-only group. (At least initially, there seem to have been some safety issues with the model; OpenAI gave several WIRED reporters access to the voice mode back in May, but the magazine reported the company "pulled it the next morning, citing safety concerns.") Users who've been able to try it have largely described the model as an impressively fast, dynamic, and realistic voice assistant -- which has made its limited access particularly frustrating to some other OpenAI users.
OpenAI has rolled out an advanced voice mode for ChatGPT, allowing users to engage in verbal conversations with the AI. The feature is being gradually introduced to paid subscribers, starting with Plus and Team users in the United States.
OpenAI, the company behind the popular AI chatbot ChatGPT, has announced the launch of an advanced voice mode feature. This new capability allows users to engage in verbal conversations with the AI, marking a significant step forward in human-AI interaction [1].
The voice mode is being gradually introduced to paid subscribers. Initially, it is available to ChatGPT Plus and Team users in the United States, with Enterprise and Edu access to follow [2]. OpenAI plans to expand access to all paid subscribers in the coming weeks [3].
The new voice feature utilizes OpenAI's text-to-speech technology, which can generate humanlike voices. Users can choose from nine voice options -- five of them newly added -- each with its own unique characteristics [4]. The system is designed to understand and respond to natural language, allowing for more intuitive and conversational interactions.
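For developers curious about the underlying capability, OpenAI separately exposes a text-to-speech endpoint in its public API. Advanced Voice Mode itself is a ChatGPT app feature with no public API at the time of writing, so the sketch below is only an illustration of OpenAI-style voice synthesis under that assumption: the "tts-1" model and "alloy" voice come from OpenAI's developer documentation, not from the nine ChatGPT app voices named in these articles, and the output filename is made up for the example.

```python
# Minimal sketch: synthesizing speech with OpenAI's public text-to-speech API.
# Assumes the official `openai` Python package (v1.x) is installed and an
# OPENAI_API_KEY is set in the environment. This is the developer-facing TTS
# endpoint, not ChatGPT's Advanced Voice Mode.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.audio.speech.create(
    model="tts-1",   # standard-quality TTS model from OpenAI's API docs
    voice="alloy",   # one of the API's built-in voices (not a ChatGPT app voice)
    input="Hi Grandma, I'm so sorry I was late to your party.",
)

# The response body is binary audio; write it out as an MP3 file.
with open("apology.mp3", "wb") as f:
    f.write(response.content)
```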
This advancement in AI technology opens up new possibilities for various industries. From customer service to educational tools, the voice-enabled ChatGPT could revolutionize how we interact with AI assistants [5]. It also raises questions about the future of human-AI relationships and the potential impact on jobs that involve voice communication.
As with any new AI technology, the introduction of voice capabilities to ChatGPT brings privacy and ethical concerns. OpenAI has stated that it does not retain audio data from user interactions, addressing some privacy worries [2]. However, the increasingly humanlike nature of AI interactions may blur the lines between human and machine communication, potentially leading to ethical dilemmas.
The launch of voice mode for ChatGPT is likely to spur further innovation in the AI industry. Competitors may rush to develop similar features, potentially leading to rapid advancements in voice-based AI technology. As these systems become more sophisticated, we can expect to see an increasing integration of AI assistants into our daily lives, both personally and professionally.