Curated by THEOUTPOST
On Tue, 1 Oct, 4:02 PM UTC
15 Sources
[1]
Meta confirms new Ray-Ban Meta AI glasses can harvest what wearers see
Meta has just released a new pair of AI-powered sunglasses with Ray-Ban, and now we have an idea of what Meta plans to do with the images and videos captured by the glasses. For those who don't know, Meta's AI-powered Ray-Bans have a camera on the front of the glasses. The camera can be used for taking photos and video, but it can also be enabled when the user initiates an AI feature by saying a keyword such as "look" and then asking Meta AI to analyze what the wearer is seeing and provide an answer. An example would be looking at a mountain and asking Meta AI for the name of that mountain or its height. When prompted, the Meta Ray-Bans will capture a selection of images to be scanned by Meta AI, and the answer will be read out loud to the wearer via the speakers. But what happens to the captured images? TechCrunch queried Meta on this and initially found the company cagey about how captured images and video are used, but since then it has provided more clarity. According to Meta policy communications manager Emil Vazquez, in an email to TechCrunch, "[I]n locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy." Meta had previously told the publication that it does not use photos and videos captured on Ray-Ban Meta for training purposes if the user chooses not to submit them to AI. However, once the user asks Meta AI to scan any images or video, that content falls under a completely different set of policies, meaning it is eligible for training purposes. What does this mean? Anyone wearing Meta's glasses is helping the company accrue a mountainous stockpile of data, and perhaps more nefarious is that buyers of the glasses may not be aware all of their images and videos are being used to build more sophisticated Meta AI models. 
According to Meta, the guidelines for using its Meta AI features are clear within the Ray-Ban Meta user interface. But what I would consider a healthy piece of criticism is the lack of any public explanation of the potential privacy concerns by Meta itself, which should come with assurances and transparency for customers. The introduction of smart glasses that can record the world around the wearer, with those recordings then used for further AI training, also introduces the problem of consent for people around the wearer, especially when Meta's AI glasses have already been hacked into a device capable of revealing the name, address, and phone number of any person they are pointed at.
[2]
Meta confirms it may train its AI on any image you ask Ray-Ban Meta AI to analyze
We recently asked Meta if it trains AI on photos and videos that users take on the Ray-Ban Meta smart glasses. The company originally didn't have much to say. Since then, Meta has offered TechCrunch a bit more color. In short, any image you share with Meta AI can be used to train its AI. "[I]n locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy," said Meta policy communications manager Emil Vazquez in an email to TechCrunch. In a previous emailed statement, a spokesperson clarified that photos and videos captured on Ray-Ban Meta are not used by Meta for training as long as the user doesn't submit them to AI. However, once you ask Meta AI to analyze them, those photos fall under a completely different set of policies. In other words, the company is using its first consumer AI device to create a massive stockpile of data that could be used to create ever-more powerful generations of AI models. The only way to "opt out" is to simply not use Meta's multimodal AI features in the first place. The implications are concerning because Ray-Ban Meta users may not understand they're giving Meta tons of images - perhaps showing the inside of their homes, loved ones, or personal files - to train its new AI models. Meta's spokespeople tell me this is clear in the Ray-Ban Meta's user interface, but the company's executives either initially didn't know or didn't want to share these details with TechCrunch. We already knew Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But now, Meta has expanded this definition of "publicly available data" to anything people look at through its smart glasses and ask its AI chatbot to analyze. This is particularly relevant now. 
On Wednesday, Meta started rolling out new AI features that make it easier for Ray-Ban Meta users to invoke Meta AI in a more natural way, meaning users will be more likely to send it new data that can also be used for training. In addition, the company announced a new live video analysis feature for Ray-Ban Meta during its 2024 Connect conference last week, which essentially sends a continuous stream of images into Meta's multimodal AI models. In a promotional video, Meta said you could use the feature to look around your closet, analyze the whole thing with AI, and pick out an outfit. What the company doesn't promote is that you are also sending these images to Meta for model training. Meta spokespeople pointed TechCrunch towards its privacy policy, which plainly states: "your interactions with AI features can be used to train AI models." This seems to include images shared with Meta AI through the Ray-Ban smart glasses, but Meta still wouldn't clarify. Spokespeople also pointed TechCrunch towards Meta AI's terms of service, which state that by sharing images with Meta AI, "you agree that Meta will analyze those images, including facial features, using AI." Meta just paid the state of Texas $1.4 billion to settle a court case related to the company's use of facial recognition software. That case was over a Facebook feature rolled out in 2011 called "Tag Suggestions." By 2021, Facebook made the feature explicitly opt-in, and deleted billions of people's biometric information it had collected. Notably, several of Meta AI's image features are not being released in Texas. Elsewhere in Meta's privacy policies, the company states that it also stores all the transcriptions of your voice conversations with Ray-Ban Meta, by default, to train future AI models. As for the actual voice recordings, there is a way to opt out. When users first log in to the Ray-Ban Meta app, they can choose whether voice recordings can be used to train Meta's AI models. 
It's clear that Meta, Snap, and other tech companies are pushing for smart glasses as a new computing form factor. All of these devices feature cameras that people wear on their face, and they're mostly powered by AI. This rehashes a ton of privacy concerns we first heard about in the Google Glass era. 404 Media reported that some college students have already hacked the Ray-Ban Meta glasses to reveal the name, address, and phone number of anyone they look at.
[3]
Watch out - your Ray-Ban smart glasses photos are helping to train Meta AI
If you use your Ray-Ban Meta smart glasses all the time, you might want to be careful about what you're snapping pictures of and what you're asking Meta AI through them, as Meta has confirmed that it may use these visual and audio inputs to train its smart assistant. That's by its own admission, in a statement it sent to TechCrunch in which Meta's policy communications manager Emil Vazquez explained that "Images and videos shared with Meta AI may be used to improve it per our Privacy Policy." It's worth highlighting that Meta only trains its AI on images and videos that you share with it - such as through the Look and Ask feature, which has the glasses take a picture used to contextualize a request like "Look and tell me more about this landmark" or "Look and translate this sign." So if you live in an area that doesn't yet have access to Meta AI (i.e. outside the US and Canada), or you simply never interact with the Ray-Ban smart glasses' AI analysis tools, then your snaps should be staying private; that is, unless you post the image on Facebook or Instagram and you live in a region where Meta now has permission to train its AI on your posts. Unfortunately, there's no way to use the AI image analysis and also keep your submitted pictures private. You have to consent to sharing your images to opt in to the feature, and you currently can't opt out short of not using the AI analysis at all. While I feel that there's something distinctly off-putting about Meta using my pictures to train its AI, this news isn't all that surprising. Other AI creators openly train their assistants on user inputs, and given how much Google and Apple have hyped up the privacy of their own on-device AI, the Ray-Ban glasses' reliance on a cloud-based AI is clearly going to involve the sharing of data. 
Also, for anyone confused about me saying my snaps have probably trained Meta's AI even though I live in the UK: I have access to Meta AI on my Ray-Bans (somehow; I think it might have something to do with my VPN) - I've used it quite a lot, so I've likely also agreed to the Privacy Policy giving Meta permission to use my submitted images for training purposes. I guess the difference between using, say, ChatGPT to analyze an image and using the glasses is that you aren't always wearing ChatGPT on your face. Even with all the safeguards - you can turn the glasses off completely with an on-device switch, and the AI only uses the images you choose to feed it - I feel this news still adds another layer of concern for users. And for smart glasses like the newly announced Meta Orion AR glasses to take off, these layers need to be peeled back, not added to. Because while most of us do carry around much of the same tech now in smartphones, there's a big psychological difference between a handset and something you're always wearing. It's also becoming easier to activate the AI with more natural speech. While this is handy for people who want to use the Meta assistant, it does open up the possibility that people may share images they didn't intend to if they aren't careful. We'll have to see what measures Meta introduces to better alert users about how their data is used by AI - and perhaps offer more comprehensive opt-out options that don't strip away functionality. For now, we recommend being a little more careful about what you share with your Ray-Ban Meta smart glasses, and other AI for that matter, as it might not be as private a conversation as you thought.
[4]
Your Ray-Ban Meta glasses are feeding Meta's AI -- Here's how
Meta has confirmed that any images analyzed through its Ray-Ban Meta glasses can also be used to train its AI models. The company initially dodged the question and then responded to TechCrunch, saying that while photos and videos taken on the Ray-Ban Meta glasses aren't used for training unless they're sent to AI, once Meta AI is asked to analyze them, those images "fall under different policies and can be used for AI training." In an email to TechCrunch, Meta's policy communications manager Emil Vazquez explained that images and videos shared with Meta AI in the U.S. and Canada "may be used to improve it," as stated in the company's privacy policy. That means whenever you ask the AI to analyze your surroundings, you pass data to Meta that it can use to improve its AI models. The reveal is especially worrying considering the new, easy-to-use AI features rolled out with Ray-Ban Meta glasses. The AI can now analyze real-time streams, such as searching through a closet to suggest what to wear, but those images will also be sent over to Meta to train the AI model. As users begin to interact with these smart glasses, it may not be clear to them that they're also giving Meta access to personal spaces, loved ones, or sensitive data for its AI development. There is no way around this other than not using Meta's multimodal AI features. Meta says that interactions with the AI feature can be used to train models, but this isn't always indicated in the user interface. The privacy concerns trailing smart glasses echo those surrounding Google Glass, but now with AI at their core. Meta, pushing its AI-powered wearables, asks: how far are users willing to go, knowingly or unknowingly, to fuel the next generation of AI models?
[5]
Meta will use pictures and voice recorded by its Google Glass-style Ray Ban 'smart glasses' to train AI
Using the glasses' "Meta AI" features -- a main selling point of the device -- on an image makes it fair game for the company to hoover up. In today's installment of the AI boom turning privacy into a quaint anachronism cherished by people born before the year 2000, Facebook parent company Meta has confirmed to TechCrunch that pictures taken by its new Ray Ban smart glasses and analyzed by onboard Meta AI tools, as well as recordings of all voice commands given to the glasses (unless you opt out), will be used by the company to train its AI models. When I first heard "Facebook Ray Ban," my mind jumped to that old FB Messenger scam -- you know, your old college RA or a friend of a friend's roommate DMing you after three years of silence to hawk 90% off spectacles at a credit card number-scraping website after their account got hacked. But we're here to discuss something a bit more sinister: Meta's "second time as farce" answer to the farce of Google Glass, a collab with eyewear brand Ray Ban to produce specs with a little camera in the frame, voice activated and sporting various functions powered by Meta's proprietary AI models. When TechCrunch first inquired about how these images would be stored and used by Meta, the company provided a CIA-style "we can neither confirm nor deny," which strikes me as a bit of a red flag. In a follow-up story, Meta confirmed to TechCrunch that any images analyzed by the glasses' onboard "Meta AI" tool are considered fair game for the company to store and train its AI models on. "In locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our privacy policy," explained a representative for the company. That makes it sound opt-in, but one of the main selling points of the glasses is their onboard AI capabilities. 
You're basically strapping a camera to your face with the power to record everything you see, and saying the wrong thing to it could make some of what you recorded the property of a megacorporation with a demonstrated lack of regard for individual privacy. Speaking of the things you say to your weird camera glasses, Meta's privacy policy also outlines that recordings of voice commands given to the tool are stored by Meta and used to train AI models as well, though TechCrunch notes that users can opt out of this when setting up a Meta AI account. But I find myself resentful of these practices less for the customers willingly opting in to this exciting new form of surveillance to the tune of $300 a pop, and more on behalf of friends, family, and those randomly passing by such tech pioneers -- people who will have no idea they're participating in Meta's grand experiments. We already seem far too comfortable filming strangers and sharing it on social media, and now we're inventing new, ever more subtle ways for people to record everyone around them for fun and profit. A pair of Harvard students has already jailbroken Meta's new Ray Bans and empowered them with a search engine that uses facial recognition to produce personal details of anyone the wearer looks at -- basically doxing on command. It's also already a matter of policy for Meta to train its AI models on all public Facebook and Instagram posts made by Americans, with an opt-out process that requires you to justify your decision to the $1.5 trillion market cap corporation. As for how to opt out of having your likeness used to train AI models without your consent via AI-empowered Ray Bans, some extra scrutiny around people with thick-framed glasses might be in order -- I promise I don't have a camera in mine!
[6]
Meta Is Training Its AI on Your Analyzed Ray-Ban Smart Glasses Images, Videos
If you ask Meta AI to analyze an image or video you take with your Ray-Ban Smart Glasses, Meta will feed that content into its AI for training. Meta is hungry for AI training data -- and is using more than just your Instagram and Facebook posts to get it. The company has confirmed that any images or videos you ask its Meta AI about through a pair of its latest Ray-Ban Smart Glasses will be used for future AI model training. "[I]n locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy," Meta policy communications manager Emil Vazquez tells TechCrunch this week. Initially, the company didn't want to reveal whether it was using content captured with the glasses to power its AI models. Now, it's admitting that it will be using any media that's been examined by the AI. Images and videos you take wearing the glasses without summoning Meta's AI, however, are supposed to be safe from being used for training. For some, this use of personal data could pose an inherent privacy concern. Unlike photos posted to social media, photos and videos taken on-device are not necessarily always appropriate or safe to share with the world -- nor are they intended to be. But if you ask Meta AI to check out those private photos, they won't be quite so private anymore. Meta CEO Mark Zuckerberg has previously defended the company's use of user-generated media for AI training, adding that the images and videos aren't really all that valuable, anyway. "I think individual creators or publishers tend to overestimate the value of their specific content in the grand scheme of this," Zuckerberg told The Verge in an interview last month. Meta's Ray-Ban Smart Glasses cost anywhere from $299 to $379 at time of writing. They're intended to help the wearer navigate daily life and stay connected to the internet, with Meta AI there to examine where you are and where you're going along the way. 
Meta also plans to add real-time translation abilities for select languages. But this promise of "continuous real-time help" comes with the knowledge that whenever that AI-powered help is provided, the resulting data will be sent to Meta for future use. The only way to opt out of this AI training is to not use the glasses' AI features at all -- and avoid being captured by someone else's pair. Meta's policies state that "your interactions with AI features can be used to train AI models" and "you agree that Meta will analyze those images, including facial features, using AI."
[7]
Meta is Probably Training AI on Images Taken by Meta Ray-Bans
Facebook parent company Meta last week added new AI features to its camera-equipped Ray-Ban Meta Glasses. You can use the camera feature on the glasses to get information about what's around you and to remember things like where you parked. There's also now support for video for AI purposes, for "continuous real-time help." With all of these new features that involve the camera continually viewing what's around the wearer, there are new questions about what Meta is doing with that data. TechCrunch specifically asked Meta if it was using the images collected by the Meta Glasses to train AI models, and Meta declined to say. "We're not publicly discussing that," Anuj Kumar told TechCrunch. Kumar is a senior director who works on AI wearables. "That's not something we typically share externally," another spokesperson said. When asked for clarification on whether images are being used to train AI, the spokesperson said "we're not saying either way." TechCrunch doesn't come out and say it, but if the answer is not a clear and definitive "no," it's likely that Meta does indeed plan to use images captured by the Meta Glasses to train Meta AI. If that weren't the case, there wouldn't seem to be a reason for Meta to be ambiguous in its answers, especially with all of the public commentary on the methods and data that companies use for training. Meta does train its AI on publicly posted Instagram and Facebook images and stories, which it considers publicly available data. But data collected from the Meta Ray-Ban Glasses specifically for interacting with AI in private isn't the same as a publicly posted Instagram image, and that's concerning. As TechCrunch notes, the new AI features for the Meta Glasses are going to be capturing a lot of passive images to feed to AI to answer questions about the wearer's surroundings. 
Asking the Meta Glasses for help picking an outfit, for example, will see dozens of images of the inside of the wearer's home captured, with those images uploaded to the cloud. The Meta Glasses have always been used for images and video, but in an active way. You generally know when you're capturing a photo or video because it's for the express purpose of uploading to social media or saving a memory, as with any camera. With AI, though, you aren't keeping those images; they're being collected for the express purpose of interacting with the AI assistant. Meta is pointedly declining to confirm what happens to images from the Meta Glasses that are uploaded to its cloud servers for AI use, and that's something Meta Glasses owners should be aware of. Using these new AI features could result in Meta collecting hundreds of private photos that wearers had no intention or awareness of sharing. If Meta is in fact not using the Meta Glasses this way, it should explicitly state that so customers can be aware of exactly what's being shared with Meta and what it is being used for.
[8]
Meta won't say whether it trains AI on smart glasses photos
Meta's AI-powered Ray-Bans have a discreet camera on the front, for taking photos not just when you ask them to, but also when their AI features trigger it with certain keywords such as "look." That means the smart glasses collect a ton of photos, both deliberately taken and otherwise. But the company won't commit to keeping these images private. We asked Meta if it plans to train AI models on the images from Ray-Ban Meta's users, as it does on images from public social media accounts. The company wouldn't say. "We're not publicly discussing that," said Anuj Kumar, a senior director working on AI wearables at Meta, in a video interview with TechCrunch on Monday. "That's not something we typically share externally," said Meta spokesperson Mimi Huggins, who was also on the video call. When TechCrunch asked for clarification on whether Meta is training on these images, Huggins responded, "we're not saying either way." Part of the reason this is especially concerning is because of the Ray-Ban Meta's new AI feature, which will take lots of these passive photos. Last week, TechCrunch reported that Meta plans to launch a new real-time video feature for Ray-Ban Meta. When activated by certain keywords, the smart glasses will stream a series of images (essentially, live video) into a multimodal AI model, allowing it to answer questions about your surroundings in a low-latency, natural way. That's a lot of images, and they're photos a Ray-Ban Meta user might not consciously be aware that they're taking. Say you asked the smart glasses to scan the contents of your closet to help you pick out an outfit. The glasses are effectively taking dozens of photos of your room and everything in it, and uploading them all to an AI model in the cloud. What happens to those photos after that? Meta won't say. Wearing the Ray-Ban Meta glasses also means you're wearing a camera on your face. 
As we found out with Google Glass, that's not something other people are universally comfortable with, to put it lightly. So you'd think it's a no-brainer for the company that's doing it to say, "Hey! All your photos and videos from your face cameras will be totally private, and siloed to your face camera." But that's not what Meta is doing here. Meta has already declared that it is training its AI models on every American's public Instagram and Facebook posts. The company has decided all of that is "publicly available data," and we might just have to accept that. It and other tech companies have adopted a highly expansive definition of what is publicly available for them to train AI on, and what isn't. However, surely the world you look at through its smart glasses is not "publicly available." While we can't say for sure that Meta is training AI models on your Ray-Ban Meta camera footage, the company simply wouldn't say for sure that it isn't. Other AI model providers have more clear-cut rules about training on user data. Anthropic says it never trains on a customer's inputs into, or outputs from, one of its AI models. OpenAI also says it never trains on user inputs or outputs through its API. We've reached out to Meta for further clarification here, and will update the story if they get back to us.
[9]
Meta won't answer whether its smart glasses are using the images you record to train its AI
Meta has very pointedly dodged the question of whether its camera-equipped smart glasses are using user-generated images to train the company's artificial intelligence models. Anuj Kumar, a senior director at Meta, was asked point blank by TechCrunch during an interview whether or not pictures taken by the Ray-Ban Meta smart glasses contributed to training the company's AI. "We're not publicly discussing that," Kumar told the publication. Meta spokesperson Mimi Huggins added: "That's not something we typically share externally." This comes just a week after Meta announced a substantial AI-related update to its smart glasses at Connect 2024. During the event, Mark Zuckerberg claimed the glasses will soon be capable of multimodal video, meaning they can give "real-time advice" based on what they see through the on-board cameras. The on-stage example showed someone getting ready for a party with the AI helping them pick out appropriate pieces for their outfit. Of course, in capturing all this extra video the company is gaining access to a huge amount of imagery that could serve as potential training data -- whether the user realizes it or not. Meta's spokespeople may not have given a clear answer during the interview, but the company's terms and conditions seem to be a little more straightforward. Under a subheading entitled "the permissions you give us," users agree that any content they create, share, post, or upload on or in connection with a Meta product gives the company: "permission to store, copy, and share them with others (again, consistent with your privacy settings), such as Meta Company Products, or service providers that support those products and services." Meanwhile, Meta's AI terms of service also state the following when it comes to image processing: Depending on where you are located, you may have the option to share images with AIs. Once shared, you agree that Meta will analyze those images, including facial features, using AI. 
This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image. You further agree that you will not upload images to Meta AI that you know to contain individuals that reside in Illinois or Texas, unless you are their legally authorized representative and consent on their behalf. All of which seems pretty clear-cut: the company is able to use the images you agree to share for the benefit of product development. And, since you have to agree to the terms and conditions in order to use the glasses in the first place, it would seem you're granting implied consent for that to take place. Practically speaking, anyone taking part in any form of internet-connected image sharing should operate under the principle that those images aren't likely to remain private. It'll be a case of each user weighing up how much value they get from a product against the privacy sacrifices they'll have to make to use it. And in the case of the Ray-Ban Meta smart glasses, there's no doubt Meta has put forward a compelling product. In fact, my colleague Jason England maintains they're his favorite gadget of the year.
[10]
The captures you make with the Ray-Ban Meta "could" be used to train their AI - Softonic
For Meta, this is made very clear in the interface of the glasses. We could sense it, but until now there was no official confirmation. Could it be that Meta is training its artificial intelligence with the photos and videos users take with their Ray-Ban Meta? The answer seems to be yes, but with nuances. Basically, any image you share with Meta AI could be used to train its AI. In an email to TechCrunch, Meta's policy communications manager, Emil Vázquez, wrote that "in places where multimodal AI is available (currently the United States and Canada), the images and videos shared with Meta AI could be used to improve it." A company spokesperson also clarified to the outlet that photos and videos captured with the Ray-Ban Meta are not used to train the AI directly. That changes completely, however, when Meta AI is used with them. If, for example, we ask Meta AI to analyze any of our captures, the policies change: now the company has free rein to use your content to train its AI. This is a concerning situation. At first glance, users of Ray-Ban Meta may not understand that by using Meta AI they are handing over their data on a silver platter without being aware of it. For Meta, this is not the case: company spokespersons clarify that this is made more than clear in the user interface of the glasses. Meta had already made it public that public content hosted on Facebook and Instagram is used to train its Llama models. An already concerning situation is worsened by the confirmation that anything the user analyzes through their glasses with Meta AI can be used without their explicit permission.
[11]
Meta Might Be Using Photos Taken on Ray-Ban Smart Glasses to Train AI
When asked this week if Meta is using photos taken on Ray-Ban smart glasses to train its AI, company representatives refused to confirm or deny. In an interview with TechCrunch on Monday, Meta was asked if it plans to train its AI models on the images taken by customers' Ray-Ban Meta Smart Glasses the same way it uses photos from public Instagram and Facebook accounts. "We're not publicly discussing that," Anuj Kumar, a senior director working on AI wearables at Meta, tells TechCrunch in a video interview. "That's not something we typically share externally," adds Meta spokesperson Mimi Huggins. When TechCrunch pushed for further clarification, Huggins replied, "We're not saying either way." For those not familiar with the Ray-Ban Meta smart glasses, they can shoot both 1080p video at 30 fps and 12-megapixel stills. Furthermore, a real-time video update is coming to the glasses which will stream images into a multimodal AI model so the glasses can answer questions about what the user is looking at. This will mean the smart glasses will be gathering even more images than before, and as TechCrunch notes, the user might not be completely aware the device is recording imagery. In private moments, the user might be asking the glasses about an object they're looking at -- unaware the images are being saved in the cloud. The obvious problem with smart glasses has always been that they can be used as a clandestine recording device. In his review of the Meta Ray-Bans, PetaPixel's Chris Niccolls wrote: "There is an LED light that flashes whilst recording stills and video. To be fair, it is quite bright but I still feel that most people won't notice. If the LED is obscured the glasses will not record, but a video can be started and will continue to record if the LED is covered afterwards. I'll leave it up to you the reader to determine how and when you feel comfortable shooting." 
While Meta and many other AI companies argue that imagery on the open web and on social media is "publicly available", it is far more difficult to argue that imagery taken on a private person's smart glasses is public. The fact that Meta won't say that it isn't using imagery from smart glasses for AI training purposes is alarming given the kind of private imagery the devices are capable of collecting.
[12]
Videos Taken With Ray-Ban Meta Smart Glasses Might Not Remain Private
Ray-Ban Meta smart glasses can now be activated with the "Hey Meta" command. Meta is reportedly staying quiet on whether it is collecting video and image data from its artificial intelligence (AI) wearable, the Ray-Ban Meta smart glasses, to train its large language models (LLMs). The company announced a new real-time video feature for the device with which users can ask the AI to answer queries and offer suggestions based on their surroundings. However, there is no clarity on what happens to this data once the AI responds to the query.

The feature in question is the real-time video capability that allows Meta AI to "look" at the user's surroundings and process that visual information to answer any query a user may have. For instance, a user can ask it to identify a famous landmark, show it their closet and ask for wardrobe suggestions, or even ask for recipes based on the ingredients in the refrigerator. Each of these functions requires the Ray-Ban Meta smart glasses to passively capture videos and images of the surroundings to understand the context.

Under normal circumstances, once the response has been generated and the user has ended the conversation, the data should remain on private servers, if not be deleted immediately, since much of it may reveal private information about the user's home and other belongings. But Meta is reportedly not saying this. Asked whether the company stores this data and trains its native AI models on it, a Meta spokesperson told TechCrunch that the company is not publicly discussing the matter. Another spokesperson reportedly noted that this information is not being shared externally and added, "we're not saying either way." The company's refusal to clearly state what happens with user data is concerning given the private, and potentially sensitive, nature of the data the smart glasses can capture.
While Meta has already confirmed training its AI models on the public data of its US-based users on Facebook and Instagram, the data from the Ray-Ban Meta smart glasses is not public. Gadgets 360 has reached out to Meta for comment. We will update the story once we receive a statement from the company.
[13]
Meta won't say if Ray-Ban Meta photos are used to train AI
The Ray-Ban Meta glasses allow users to take photos and videos, but Meta AI can also harness your camera to give you answers about the world around you. More recently, the company announced the ability to passively stream video and get continuous assistance from Meta AI. But Meta's reluctance to reveal whether it trains its AI models on your photos means you should really think twice about taking photos and videos and using this new video streaming feature.
[14]
Meta's Privacy Riddle: Where Do Ray-Ban Smart Glasses Videos Go?
The glasses sport enhanced video abilities that allow users to interact with Meta AI about their surroundings, live, via the "Hey Meta" command. For example, a user might ask the AI for details about a historical site they are visiting while sightseeing, about clothing choices, or even about what to cook based on the ingredients available. However, this functionality requires constant video capture, so questions arise about privacy as well as data use.
[15]
Mark Zuckerberg's Meta Can Use Ray-Ban Images You Click To Train AI: There's No Opt Out Either - Meta Platforms (NASDAQ:META)
Meta Platforms Inc. META has confirmed that it may use any image shared with Meta AI for training purposes, including those captured on Ray-Ban Meta smart glasses.

What Happened: Meta initially did not disclose whether it trains its AI on photos and videos taken on the Ray-Ban Meta smart glasses. Now, however, the company has revealed new information, TechCrunch reported. Emil Vazquez, Meta policy communications manager, stated, "In locations where multimodal AI is available (currently U.S. and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy."

There is no way for users to opt out, short of refraining from using Meta's multimodal AI features altogether. In its privacy policies, Meta has also stated that it automatically retains transcriptions of voice conversations with Ray-Ban Meta to train future AI models. However, users can decline to have their actual voice recordings used for this purpose by selecting their preference when first logging into the Ray-Ban Meta app. Meta did not immediately respond to Benzinga's request for comment.

Why It Matters: Meta has been making significant strides in AI and VR technology, with analysts predicting growth that could push it toward an Apple-like valuation. However, this development comes amid concerns over privacy and data usage. Meta paid $1.4 billion to the state of Texas to settle a court case related to its use of facial recognition software. Previously, it was reported that Meta uses all publicly shared content on its social media platforms, such as Instagram and Facebook, to train its Llama AI models. In June, the tech giant came under fire for doing the same in Norway.
Meanwhile, last month it was reported that Zuckerberg had increased his wealth by an astonishing $51 billion this year, raising his total net worth to $179 billion. At the time of writing, his net worth had reached $203 billion, making him the third wealthiest person in the world after Elon Musk and Jeff Bezos.
Meta confirms that images and videos analyzed by AI features in Ray-Ban Meta smart glasses may be used to train its AI models, raising privacy concerns for users and bystanders.
Meta, the parent company of Facebook, has recently confirmed that images and videos captured by its new Ray-Ban Meta smart glasses can be used to train its artificial intelligence models. This revelation has sparked concerns about privacy and data usage among users and privacy advocates alike [1][2].
The Ray-Ban Meta smart glasses, a collaboration between Meta and Ray-Ban, come equipped with a camera capable of capturing photos and videos. The glasses also feature an AI assistant that can be activated using voice commands. Users can prompt the AI to analyze their surroundings by using keywords such as "look" followed by a question [1].
Meta's policy communications manager, Emil Vazquez, clarified that "in locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy" [2]. This means that any image or video analyzed by the AI assistant becomes eligible for use in training Meta's AI models [3].
The revelation raises several privacy concerns. Currently, the only way to opt out of having images used for AI training is to avoid using the AI analysis features altogether. Meta states that this policy is clearly outlined in the user interface and terms of service [3][4].
This development is part of a larger trend in AI development, where user-generated content is increasingly used to train AI models. Meta already trains its Llama AI models on public posts from American users on Instagram and Facebook [2].
As Meta continues to develop AI-powered wearables, questions arise about the balance between technological advancement and privacy protection. The ease of activating AI features with natural speech may lead to unintended data sharing [5].
Privacy advocates and industry observers recommend that users exercise caution when using AI-powered devices like the Ray-Ban Meta glasses. They suggest being mindful of what images are shared with the AI assistant and understanding the potential implications of data usage [4][5].
As the AI industry evolves, there is a growing need for clearer communication about data usage policies and more comprehensive opt-out options that don't compromise device functionality [5].
Ā© 2024 TheOutpost.AI All rights reserved