Curated by THEOUTPOST
On Thu, 26 Sept, 12:08 AM UTC
10 Sources
[1]
Meta unveils voice mode: You can now talk to and share photos with Meta AI
Menlo Park, California: Meta Platforms has extended multimodal voice and vision support to its personal AI assistant, Meta AI, which can now talk to users and describe what it sees in pictures. "Voice is going to be a way more natural way of interacting with AI than text," Mark Zuckerberg, chief executive of Meta, said while unveiling the voice mode, which features iconic voices including those of Awkwafina, John Cena and Kristen Bell.

Meta also announced a new version of its mixed reality headset, the Meta Quest 3S, and unveiled Orion, a prototype of the world's first holographic augmented-reality glasses, at the Meta Connect event at the company's headquarters in Menlo Park, California on Wednesday. For developers, the company launched the Llama 3.2 series of language and vision models, along with its smallest model ever, which can run on mobile devices.

Here's a rundown of the key announcements made during the company's annual conference:

Back in September 2023, Zuckerberg first revealed the Meta AI chatbot, offering text and image generation capabilities to help users with productivity, efficiency and creative work. Its deep integration with Meta's social media platforms has enabled the chatbot to scale faster than any other personal assistant. Over 400 million people are using Meta AI, of whom 185 million use it across Messenger, Facebook, WhatsApp and Instagram each week, the company said.

"Meta AI is on track to being the most used AI assistant by the end of this year," Zuckerberg said during his keynote address while announcing the addition of voice and vision support. Users can now talk to Meta AI, ask it questions, share photos in chat and have it explain what's in them, or even edit them, he said. The voice feature will roll out in the US, Canada, Australia and New Zealand over the next month. On Instagram, Facebook and Messenger, users can also experiment with automatic video dubbing and lip syncing on Reels in their preferred language.

Meta is leading the open-source revolution in AI large language models. Its Llama series became the fastest-growing open-source family of models, with 350 million downloads globally on Hugging Face as of August this year. Monthly token usage grew 10x from January to July 2024. Zuckerberg said the industry is at an inflection point where proprietary model companies are aggressively slashing prices to compete with the open-source Llama models. Enterprises also consider open-source AI more trustworthy and flexible, since it can be hosted in protected environments, he said.

On Wednesday, Meta introduced the Llama 3.2 models, its first major vision models (11 billion and 90 billion parameters), which understand both images and text. It also released its smallest model ever (1 billion parameters), which can run on edge and mobile devices without needing to send data to the cloud.

At the Meta headquarters in Menlo Park, a hall packed with an audience of some 3,000 people roared when Zuckerberg unveiled Orion, the world's first holographic augmented-reality glasses. "It's a fully functioning prototype of the most advanced glasses the world has ever seen," he said. "If you want to be with someone who is far away, they're going to be able to teleport as a hologram into your living room, and sit right there with you. You're going to be able to tap your fingers and bring up a game of cards or chess or holographic ping pong or whatever it is you want to do together, work, play or whatever."

He said AR/VR glasses will be the next generation of computing and human communication. But Orion is still a prototype, and Meta wants to accomplish certain goals before releasing it as a consumer product. "We're going to keep tuning the display system to make it sharper. I want to keep working on the design to make it smaller and a bit more fashionable. We need to keep working on the manufacturing to make it a lot more affordable too."

On Wednesday, the company also launched the Meta Quest 3S headset at a $299.99 price point, with pre-orders starting September 25.

(The reporter is in Menlo Park to cover the Meta Connect event at the invitation of Meta Platforms.)
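For developers curious what the Llama 3.2 vision models mentioned above look like in practice, here is a minimal, illustrative sketch (not drawn from the article) of querying the 11B vision variant with Hugging Face's transformers library. It assumes transformers 4.45 or later, access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint, and "photo.jpg" as a stand-in for any local image:

```python
# Hedged sketch: asking a Llama 3.2 vision model about a photo.
# Assumes transformers >= 4.45 and access to the gated
# meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint on Hugging Face;
# "photo.jpg" stands in for any local image.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("photo.jpg")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What's in this picture?"},
    ]}
]

# Render the chat template, then bundle image + text into model inputs.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False,
                   return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(output[0], skip_special_tokens=True))
```

The 90B variant follows the same interface; only the checkpoint name (and the hardware required) changes.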
[2]
Meta takes some big AI swings at Meta Connect 2024
In this article, we'll dig into several powerful and impressive announcements related to the company's AI efforts. Zuckerberg announced the availability of Llama 3.2, which adds multimodal capabilities. In particular, the model can understand images. He compared Meta's Llama 3.2 large language models with other LLMs, saying Meta "differentiates itself in this category by offering not only state of the art models, but unlimited access to those models for free, and integrated easily into our different products and apps."

Meta AI is Meta's AI assistant, now based on Llama 3.2. Zuckerberg stated Meta is on track to be the most used AI assistant globally, with almost 500 million monthly active users. To demonstrate the model's understanding of images, Zuckerberg opened an image on a mobile device using the company's image-edit capability. Meta AI was able to change the image, modifying a shirt to tie-dye or adding a helmet, all in response to simple text prompts.

Meta's AI assistant is now able to hold voice conversations with you from within Meta's apps. I've been using a similar feature in ChatGPT and found it useful when two or more people need to hear the answer to a question. Zuckerberg claims that AI voice interaction will be bigger than text chatbots, and I agree -- with one caveat: getting to the voice interaction has to be easy. For example, to ask Alexa a question, you simply speak into the room. But to ask ChatGPT a question on the iPhone, you have to unlock the phone, go into the ChatGPT app, and then enable the feature. Until Meta has devices that just naturally listen for speech, I fear even the most capable voice assistants will be constrained by inconvenience.

You can also give your AI assistant a celebrity voice. Choose from John Cena, Judi Dench, Kristen Bell, Keegan-Michael Key, and Awkwafina. Natural voice conversation will be available in Instagram, WhatsApp, Messenger, and Facebook, and is rolling out today.

Next up are some features Meta has added to its AI Studio chatbot creation tool. AI Studio lets you create a character (either an AI based on your interests or an AI that "is an extension of you"). Essentially, you can create a chatbot that mirrors your conversational style. But now Meta is diving into the realm of uncanny-valley deepfakes. Until this announcement, AI Studio offered a text-based interface. But Meta is releasing a version that is "more natural, embodied, interactive." And when it comes to "embodied", they're not kidding around. In the demo, Zuckerberg interacted with a chatbot modeled on creator Don Allen Stevenson III. The interaction appeared to be a "live" video of Stevenson, complete with tracked head motion and lip animations. Basically, he could ask Robot Don a question and it looked like the real guy was answering.

Powerful, freaky, and unnerving. Plus, the potential for creating malicious chatbots using other folks' faces seems a distinct possibility. Meta seems to have artificial lip-sync and facial movements locked down. They've reached a point where they can make a real person's face move and speak generated words.

Meta has extended this capability to translation. They now offer automatic video dubbing on Reels, in English and Spanish. That feature means you can record a Reel in Spanish, and the social network will play it back in English -- and it will look like you're speaking English. Or you can record in English and it will play back in Spanish, as if you're speaking Spanish. In one example, creator Ivan Acuña spoke in Spanish, but the dub came back in English. As with the previous example, the video was nearly perfect, and it looked like Acuña had been recorded speaking English originally.

Zuckerberg came back for another dip into the Llama 3.2 model. He said the multimodal nature of the model has increased the parameter count considerably. Another interesting part of the announcement was the much smaller 1B and 3B models optimized to work on-device. This effort will allow developers to create more secure and specialized models for custom apps, with the model living right in the app. Both of these models are open source, and Zuckerberg was touting the idea that Llama is becoming "the Linux of the AI industry".

Finally, a bunch more AI features were announced for Meta's AI glasses. We have another article that goes into those features in detail.
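To make the "on-device" point above concrete, here is a minimal sketch (my own illustration, not Meta's) of running the lightweight 1B instruct model locally with Hugging Face's transformers pipeline; it assumes access to the gated meta-llama/Llama-3.2-1B-Instruct checkpoint:

```python
# Minimal sketch: running the lightweight Llama 3.2 1B model locally.
# Assumes the transformers library and access to the gated
# meta-llama/Llama-3.2-1B-Instruct checkpoint on Hugging Face.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # falls back to CPU if no GPU is available
)

messages = [
    {"role": "system", "content": "You are a concise on-device assistant."},
    {"role": "user", "content": "Why do small models matter on phones?"},
]

out = pipe(messages, max_new_tokens=128)
# The pipeline returns the full chat; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```

A model this size fits comfortably in a few gigabytes of memory, which is what makes the no-cloud, in-app deployment Zuckerberg described plausible.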
[3]
Meta AI Can Now See and Speak to You
Today's Meta Connect event was chock-full of AI news, including AI integrations for the new Meta Quest 3S mixed reality headset and upgrades to its virtual assistant, Meta AI. Meta AI is getting new sight and voice abilities, including celebrity voices and photo editing tools. Meta is also testing the integration of AI-generated content into Instagram and Facebook feeds, specifically tailored to users' interests.

You'll be able to use Meta AI's voice on its social platforms: Instagram, Facebook, WhatsApp and Messenger. Meta said that the new voice feature is beginning to roll out today in the US, Canada, New Zealand and Australia. If you can't access the feature yet, don't panic -- the rollout will continue over the next month. Meta tapped a few celebrities to lend their voices and help bring some humanity to its AI voice. You can choose between the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell for your Meta AI.

Meta AI is also getting upgraded visual tech. Now when you upload a photo to Meta AI, it can tell you what's in it, like identifying a specific type of animal or helping break down a recipe. Meta also said you can use its AI assistant to help edit photos, including removing background elements, adding new elements and changing the background.

Instagram is also getting some AI attention: Meta said it is using AI to improve video dubbing and lip syncing so Reels can be translated into, and accessible in, more languages beyond the original. Meta also said it's testing "imagined" -- meaning AI-generated -- content in Instagram and Facebook feeds. Meta said the images will be tailored to individual users' interests. Meta currently adds a label to AI-generated content.
[4]
Meta AI Voice is the latest voice assistant to launch -- here's how it stacks up
Unlike Siri and Alexa, Meta AI Voice sits firmly in the conversational category, and there is a good reason for that: the company needed a better way for people to interact with its Ray-Ban smart glasses, Quest VR headsets, and other devices without access to a keyboard or touch screen. Conversational AI voice allows you to talk to the AI in natural language as if you were talking to a human, and lets it handle complex and vague queries. For example, in the Meta Connect demo, Mark Zuckerberg suggested holding an avocado up to the Meta Ray-Ban smart glasses and saying "What can I make with this?" without specifying the nature of "this".

Meta has done something Google and OpenAI haven't, though. It offers up the voices of the famous instead of an unnamed actor or generated voice. Initially, you'll be able to converse with an AI that sounds like Dame Judi Dench, John Cena, Kristen Bell, and more. Unfortunately, the quality of the synthetic voice isn't up there with Gemini or ChatGPT Voice, but you can interrupt it mid-flow and ask it the same level of natural queries. It is accessible on WhatsApp, Facebook Messenger and Instagram.

While Meta AI Voice might be less realistic and natural than ChatGPT Advanced Voice, the one thing it has in its favor is the Meta ecosystem. More than three billion people around the world use at least one of Meta's core products every day. Meta AI has over 400 million monthly active users, and it is only really available in the States. The text-based version is there within all the core products and looks the same whether you open it in WhatsApp, Instagram, Facebook or Messenger. Right now you can use it to generate images, have a text-based conversation and even play games. With voice, you'll be able to leave your phone on the desk and chat away as you go about other tasks.

Meta AI also now uses Llama 3.2 90B as its "brain". This is a new multimodal model from Meta that can analyze images as well as text. It is likely future versions will also be able to work with sounds, documents and even video -- if it matches the progress of OpenAI's models. This means that, at the touch of a button in any of the apps you use every day, you'll be able to start talking to an AI. You'll be able to give it a photo you've just taken, and ask it for details of the image or to change an aspect of it, such as removing an unsightly trash can.

The real power of Meta AI Voice will be felt by those wearing the Ray-Ban smart glasses or a Quest headset. These devices will be able to see the world as you do and allow you to talk to the AI about anything you see in real time.
[5]
Meta AI gets a bunch of free upgrades: Voice, vision and auto-dubbing
In the race to make truly useful AI for a mass audience, Meta just jumped forward a few key steps -- including the AI's ability to "see" objects and provide live, lip-synched translations. At the Meta Connect developers' conference, CEO Mark Zuckerberg unveiled the latest version of Llama, the open-source Large Language Model (LLM) powering the AI chatbot in the company's main services: Facebook, WhatsApp, Messenger, and Instagram. Given that reach, Zuckerberg described Meta AI as "the most-used AI assistant in the world, probably," with about 500 million active users.

The service won't be available in the European Union yet, given that Meta hasn't joined the EU's AI pact, but Zuckerberg said he remains "eternally optimistic that we can figure that out." He's also optimistic that the open-source Llama -- a contrast to Google's Gemini and OpenAI's GPT, both proprietary closed systems -- will become the industry standard. "Open source is the most cost-effective and the most customizable," Zuckerberg said. Llama is "sort of the Linux of AI."

But what can you do with it? "It can understand images as well as text," Zuckerberg added -- showing how a photo could be manipulated simply by asking the Llama chatbot to make edits. "My family now spends a lot of time taking photos and making them more ridiculous."

Voice chat is now rolling out to all versions of Meta AI, including voices from celebrities such as Judi Dench, John Cena and Awkwafina. Another user-friendly update: when using Meta AI's voice assistant with the company's smart glasses, you no longer have to use the words "hey Meta" or "look and tell me." Zuckerberg and his executives also demonstrated a number of use cases. For example, a user can set up Meta AI to provide pre-recorded responses to frequently asked questions over video. You can use it to remember where you parked. Or you can ask it to suggest items in your room that might help to accessorize a dress.

The most notable, and possibly most useful, feature: live translation. Currently available in Spanish, French, Italian and English, the AI will automatically repeat what the other person said in your chosen language. Zuckerberg, who admitted that he doesn't really know Spanish, demonstrated this feature by having an awkward conversation live on stage with UFC fighter Brandon Moreno. Slightly more impressive was the live translation option on Reels and other Meta videos: the AI will synchronize the speakers' lips so they look like they're actually speaking the language you're hearing. Nothing creepy about that at all.
[6]
Meta AI can now talk to you and edit your photos
Over the last year, Meta has made its AI assistant so ubiquitous in its apps that it's almost hard to believe Meta AI is only a year old. But, one year after its launch at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful.

One of the biggest changes is that users will be able to have voice chats with Meta AI. Until now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And like last year's Meta AI launch, the company tapped a group of celebrities for the change. Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI's new abilities, it's worth noting that the company quietly phased out the celebrity chatbot personas that launched at last year's Connect.

In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item.

The new abilities arrive alongside the company's latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can "bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story." Llama 3.2 is "competitive" on "image recognition and a range of visual understanding tasks" compared with similar offerings from ChatGPT and Claude, Meta says.

The social network is testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with "automatic dubbing and lip syncing." According to Meta, that "will simulate the speaker's voice in another language and sync their lips to match." It will arrive first to "some creators' videos" in English and Spanish in the US and Latin America, though the company hasn't shared details on rollout timing.

Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user's interests and past activity. For example, Meta AI could surface an image "imagined for you" that features your face.
[7]
How Meta's AI Advancements May Impact Social Commerce
Meta's latest AI upgrades, unveiled at its annual Connect conference, could change online shopping through voice-activated assistants and image-recognition technology on social media platforms. The tech giant reported that over 400 million people use Meta AI monthly, with 185 million engaging weekly across its products. Meta claims its AI assistant will become the most used globally by the end of the year.

"AI-generated images and captions can supercharge social media marketing. Brands can make content that feels custom-made for each user, at a large scale," Mike Vannelli, an industry expert, told PYMNTS. He added, "AI can analyze what users like and help businesses make targeted campaigns. This leads to more engagement and better returns on investment."

Alongside its consumer-facing updates, Meta announced Llama 3.2, a major advancement in its open-source AI model series. The new release includes small and medium-sized vision language models (11B and 90B parameters) and lightweight, text-only models (1B and 3B parameters) designed for edge and mobile devices. The vision models can analyze images, understand charts and graphs, and perform visual grounding tasks. The lightweight models, optimized for on-device use, support multilingual text generation and tool calling, enabling developers to build personalized applications that are claimed to prioritize user privacy.

New features include voice interaction. Users can now talk to Meta AI on Messenger, Facebook, WhatsApp and Instagram DM and get spoken responses. Meta is rolling out various voice options, including AI voices of celebrities like Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell. Owais Rawda, senior account manager at Z2C Limited, told PYMNTS, "Voice-interactive AI creates a more personal customer experience. It gives quick answers, making shopping easier."

Users can also share photos with Meta AI for analysis and editing. The AI can identify objects in images, answer questions, and edit pictures on command. For example, users can ask Meta AI to identify a flower in a hiking photo or get cooking instructions for a dish they've photographed. Meta AI's new editing capabilities allow users to request photo changes, from altering outfits to replacing backgrounds. The company is also testing an AI translation tool for Reels that automatically translates audio and synchronizes lips in videos, starting with English and Spanish.

Meta is expanding its business AI tools to companies using click-to-message ads in English on WhatsApp and Messenger. These AI agents can chat with customers, offer help, and assist with purchases. The company said ad campaigns using AI features got 11% more clicks and 7.6% more conversions than regular campaigns. Over a million advertisers are using these tools, which generated 15 million ads in the past month. Vannelli highlighted changes in customer service: "Meta AI makes shopping smoother. Customers don't have to switch between pages or wait for a human to respond."

Meta is also enhancing its Imagine feature, allowing users to create AI-generated images of themselves as superheroes or in other scenarios directly in their feeds, Stories and Facebook profile pictures. These images can be easily shared and replicated by friends. As Meta refines its AI offerings, its approach to data use, transparency, and user control will be crucial in shaping the adoption and success of these new features.

These AI advances represent a big step in Meta's strategy to integrate AI into its core products, potentially reshaping how businesses and consumers interact in the digital marketplace.
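The tool-calling support mentioned above for the lightweight models is exposed through standard chat templates. As a rough, illustrative sketch (not from the article), here is how a tool schema might be passed to the 3B instruct model's chat template via Hugging Face transformers; the get_weather function is a hypothetical tool defined only for this example, and the checkpoint name assumes the publicly listed meta-llama/Llama-3.2-3B-Instruct:

```python
# Illustrative sketch of Llama 3.2 tool calling via the chat template.
# Assumes access to the gated meta-llama/Llama-3.2-3B-Instruct checkpoint;
# get_weather is a hypothetical tool defined only for this example.
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22C"  # stub; a real tool would call a weather API

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What's the weather in Menlo Park?"}],
    tools=[get_weather],          # schema is derived from the docstring
    add_generation_prompt=True,
    tokenize=False,
)
# The rendered prompt embeds the tool's JSON schema, so the model can
# reply with a structured tool call for the application to execute.
print(prompt)
```

For commerce use cases like those described above, the same pattern would let an on-device model call inventory or checkout functions without routing customer data through a cloud model.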
[8]
Meta AI Gets a Huge Upgrade; Voice Chat and AI Photo Editing are Here
With the release of the new Llama 3.2 multimodal models -- 11B and 90B -- Meta has unlocked new use cases for its Meta AI chatbot. At the Meta Connect 2024 event, the company announced several new features for Meta AI that allow users to interact with various modalities like audio and images.

First and foremost, you can now talk to Meta AI using voice, and it will reply out loud. You can continue the conversation and ask questions on any topic. The best part is that it can find even the latest information by browsing the internet. It's not as conversational as Gemini Live or ChatGPT Advanced Voice, but you get a standard two-way voice chat interface. There is no support for interruptions, though. The timing of this announcement couldn't be better, as ChatGPT Advanced Voice Mode started rolling out to users today. Meta voice chat is available through the Meta AI chatbot on WhatsApp, Facebook, Messenger, and Instagram DM. There are different AI voices available, and you can even choose the voice of public figures such as John Cena, Keegan-Michael Key, Awkwafina, Dame Judi Dench, and Kristen Bell.

Since Meta AI is now powered by the Llama 3.2 11B and 90B multimodal models, you can upload an image and ask Meta AI to analyze it. For instance, you can upload an image of a mountain, ask where it is located, and find more information along the way. You can also upload charts and diagrams and have it infer meaning from your visual input.

Next, Meta AI brings AI photo editing to its social media apps. You can upload an image and ask the AI chatbot to change the background, erase unwanted objects, change outfits, and much more. Basically, AI photo editing is now readily available on Meta's social stack, including WhatsApp, FB Messenger, Instagram, and Facebook. It works similarly to Google's Magic Editor, but it's available within your social media apps and you can seamlessly share the results as stories.

Best of all, the Reimagine AI tool now lets you create AI-generated images of yourself. You can reimagine your photos from feed, stories, and Facebook profile pictures by simply adding a prompt, and Meta AI will instantly generate an image based on it. This means you don't have to train a LoRA on your images to create AI-generated pictures of yourself. My colleague and boss Devinder is in Palo Alto attending Meta Connect 2024. He got a chance to go hands-on with this new Meta AI capability in WhatsApp and generate some cool photos.

Last but not least, one of the most promising features is the automatic translation of Reels. If a creator has published a Reel in a foreign language that you don't understand, Meta AI will translate the audio into your language automatically with perfect lip syncing. Currently, the feature is limited to Latin America and the US in English and Spanish. Meta says it will be expanded to more regions and languages soon.

Next, Facebook and Instagram users may see Meta AI-generated images in their feed based on their interests or current trends. You can also tweak the prompt to generate new content in your feed. And finally, users will be able to personalize themes using AI in their private DMs.

So these are the new Meta AI features coming to WhatsApp, Instagram, Facebook, and Messenger. Are you excited to check them out? Let us know in the comments below.
[9]
I spoke to Meta AI and, yes, it can carry on a conversation
Face it, no AI is complete until it has a voice, and now Meta AI has one and is ready to engage with you on your favorite Meta platform, including WhatsApp, Facebook, Instagram, and Messenger. I had a chance to try Meta's new Llama-powered chatty capabilities at Meta Connect 2024, which is taking place September 24-26 at the iconic 1 Hacker Way Meta headquarters.

Admittedly, the conditions were suboptimal. Meta AI was on a phone out on the Meta campus, where people milled about and helicopters buzzed overhead. A Meta employee told me the phone's mic was on and Meta AI was listening, and I stood before it talking. "So I can just talk to it?" I asked. Meta AI piped up, "Yes you can. I'm here to help. Just say what's on your mind, and I'll do my best to assist you. Go ahead."

Suddenly flummoxed, I couldn't think of a question, so I asked Meta AI if it knew where New York City is, and it immediately gave me a detailed answer. As Meta AI talked, I interrupted and told it I was thinking of moving there, but I didn't know the best place. "That's exciting," Meta AI responded, and began outlining the city's five boroughs. I interrupted again and told Meta AI I was considering Manhattan. Without missing a beat, Meta AI told me about the diversity Manhattan offers. Everything Meta AI said also appeared as text on the screen.

I asked Meta AI if it thought I could get a condo in Harlem for under $500,000. To my surprise, it said yes and gave me detailed examples. At this point, there was a bit too much sound interference, and Meta AI did not hear me when I asked about a moving company or when I asked it to stop responding. It really seemed to enjoy going through Harlem condo opportunities. By turning off the speaker for a second, we were able to regain control of Meta AI, which quickly gave me some moving company suggestions.

Even with that glitch at the end, this was an impressive little demo. Meta AI's speech capabilities are smart, understand context, and can pivot if you interrupt. They're rolling out now in the US, Canada, Australia, and New Zealand. Meta AI can chat, tell stories, and figure things out, like where to move and how to find a home within your price range.
[10]
AI Voice Tools Bring 'Human' Touch to eCommerce, With Meta and OpenAI Leading the Way
OpenAI and Meta have introduced new artificial intelligence (AI) voice features that could reshape how consumers interact with brands. Meta's upgrade includes celebrity voices like Judi Dench and John Cena, while OpenAI has rolled out enhanced voice capabilities for its ChatGPT users. Experts say these innovations promise more natural and personalized interactions, a shift in the eCommerce space.

"This isn't just about convenience -- it's about creating real, human connections between brands and customers," Valentin Radu, founder of Omniconvert, told PYMNTS. Nearly half of U.S. consumers expect voice assistants to match human intelligence and reliability within five years, with many indicating they'd be willing to pay for such services, according to a recent PYMNTS survey of 2,939 people.

Meta's AI assistants will now use celebrity voices, a move expected to increase user engagement by making interactions more relatable and fun. However, the bigger update involves the ability to process visual information, such as user-uploaded photos. This feature opens new doors for social commerce, allowing users to upload a picture of a product and receive instant, AI-driven recommendations or purchase options. Radu sees this shift as a natural next step. "AI voice features provide a seamless, personal experience," Radu said, noting that the technology creates a more intuitive shopping experience by offering immediate, tailored responses, reducing friction for the consumer. Imagine a shopper taking a photo of a dress, uploading it to Instagram, and receiving voice-guided purchase suggestions in real time. Introducing celebrity voices isn't just for entertainment -- it adds a human touch to the interaction, making digital assistants feel more personable. The shift could lead to higher conversion rates and more meaningful brand-customer relationships.

OpenAI has also advanced its voice capabilities, making conversations with the ChatGPT bot more fluid. This update aims to provide smoother, more natural communication, particularly useful in customer service and eCommerce environments where timely, accurate responses are essential. Premium ChatGPT users will be the first to try it, with access expanding to enterprise customers soon.

Simona Vasytė-Kudakauskė, CEO of Perfection42, sees these developments as a game-changer for customer service in eCommerce. "Voice-based AI can handle interactions from problem identification to refund processing," she told PYMNTS. Voice AI could also help companies reengage with shoppers. Instead of sending abandoned-cart emails, voice assistants could call customers directly, offering a more personal nudge to complete their purchases.

Meta and OpenAI's introduction of AI voice tools has set the stage for a new race in the eCommerce world. Both companies are positioning themselves as leaders in this space, where personalized voice experiences could become the norm. Integrating voice into social media platforms could give Meta an edge, particularly for platforms like Instagram and Facebook, where users already engage in shopping-related activities. "Platforms that seamlessly connect voice features with user behavior will have the upper hand," Radu predicted. He envisions AI voice becoming the preferred method for users to interact with brands and purchase directly from their social media feeds.

OpenAI, on the other hand, is likely to dominate customer service, observers say. Its voice-enabled ChatGPT can handle customer inquiries, offer personalized product recommendations, and even process orders. Vasytė-Kudakauskė believes companies will gravitate toward the platform that best integrates AI voice into their customer service pipeline, creating smoother, faster interactions. AI voice features are changing the eCommerce landscape, from social media shopping to streamlined customer service. Meta and OpenAI are betting on these tools as the next step in digital commerce.
Meta has introduced a voice mode for its AI assistant, allowing users to engage in conversations and share photos. This update, along with other AI advancements, marks a significant step in Meta's AI strategy across its platforms.
Meta, the parent company of Facebook, Instagram, and WhatsApp, has unveiled a new voice mode for its AI assistant, marking a significant advancement in user interaction across its platforms [1]. This feature allows users to engage in voice conversations with Meta AI, enhancing the accessibility and convenience of AI-powered assistance.
In addition to voice interaction, Meta AI now boasts improved visual recognition capabilities. Users can share photos with the AI, which can then analyze and discuss the contents of the images [3]. This feature opens up new possibilities for creative expression and information sharing within Meta's ecosystem.
The voice mode for Meta AI is rolling out over the next month, starting in the US, Canada, Australia, and New Zealand [1]. It is available across Meta's platforms, including WhatsApp, Messenger, Facebook, and Instagram [4]. This integration aims to provide a seamless AI-assisted experience across Meta's family of apps.
Meta's AI developments extend beyond voice and visual recognition. The company has released Llama 3.2, the latest iteration of its large language model, which adds its first vision models (11B and 90B parameters) alongside lightweight 1B and 3B models that can run on-device [5]. The update delivers improved performance and capabilities, further solidifying Meta's position in the competitive AI landscape.
These AI advancements play a crucial role in Meta's broader strategy for the future of social interaction and computing. At Meta Connect 2024, the company showcased how AI will be integrated into various aspects of its platforms, including virtual and augmented reality experiences [2].
As Meta expands its AI capabilities, questions about privacy and data usage arise. The company assures users that interactions with Meta AI are designed with privacy in mind, although specific details about data handling and user consent mechanisms remain to be fully clarified [1].
Meta's introduction of voice-enabled AI assistance puts it in direct competition with established players like Apple's Siri, Google Assistant, and Amazon's Alexa [4]. The integration of this technology across Meta's popular social platforms could potentially give it a unique advantage in user engagement and data collection.