Curated by THEOUTPOST
On Wed, 30 Apr, 12:05 AM UTC
9 Sources
[1]
WhatsApp Is Gambling That It Can Add AI Features Without Compromising Privacy
WhatsApp's AI tools will use a new "Private Processing" system designed to allow cloud access without letting Meta or anyone else see end-to-end encrypted chats. But experts still see risks.

The end-to-end encrypted communication app WhatsApp, used by roughly 3 billion people around the world, will roll out cloud-based AI capabilities in the coming weeks that are designed to preserve WhatsApp's defining security and privacy guarantees while offering users access to message summarization and composition tools. Meta has been incorporating generative AI features across its services that are built on its open source large language model, Llama. And WhatsApp already incorporates a light blue circle that gives users access to the Meta AI assistant. But many users have balked at this addition, given that interactions with the AI assistant aren't shielded from Meta the way end-to-end encrypted WhatsApp chats are.

The new feature, dubbed Private Processing, is meant to address these concerns with what the company says is a carefully architected and purpose-built platform devoted to processing data for AI tasks without the information being accessible to Meta, WhatsApp, or any other party. While initial reviews by researchers of the scheme's integrity have been positive, some note that the move toward AI features could ultimately put WhatsApp on a slippery slope.

"WhatsApp is targeted and looked at by lots of different researchers and threat actors. That means internally it has a well understood threat model," says Meta security engineering director Chris Rohlf. "There's also an existing set of privacy expectations from users, so this wasn't just about managing the expansion of that threat model and making sure the expectations for privacy and security were met -- it was about careful consideration of the user experience and making this opt-in."

End-to-end encrypted communications are only accessible to the sender and receiver, or the people in a group chat. The service provider, in this case WhatsApp and its parent company Meta, is boxed out by design and can't access users' messages or calls. This setup is incompatible with typical generative AI platforms that run large language models on cloud servers and need access to users' requests and data for processing. The goal of Private Processing is to create an alternate framework through which the privacy and security guarantees of end-to-end encrypted communication can be upheld while incorporating AI.

Users opt into using WhatsApp's AI features, and they can also prevent people they're chatting with from using the AI features in shared communications by turning on a new WhatsApp control known as "Advanced Chat Privacy." "When the setting is on, you can block others from exporting chats, auto-downloading media to their phone, and using messages for AI features," WhatsApp wrote in a blog post last week. Like disappearing messages, anyone in a chat can turn Advanced Chat Privacy on and off -- which is recorded for all to see -- so participants just need to be mindful of any adjustments.

Private Processing is built with special hardware that isolates sensitive data in a "Trusted Execution Environment," a siloed, locked-down region of a processor. The system is built to process and retain data for the minimum amount of time possible and is designed to grind to a halt and send alerts if it detects any tampering or adjustments.
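The Advanced Chat Privacy control described above is essentially a per-chat flag whose changes are visible to everyone in the conversation. Below is a minimal, hypothetical Python sketch of how such a flag might behave; the class, method, and field names are invented for illustration and are not WhatsApp's actual API.

```python
# Conceptual sketch only: a per-chat flag whose changes are recorded for every
# participant to see, loosely modeled on the "Advanced Chat Privacy" control
# described above. Names are invented, not WhatsApp's API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chat:
    participants: List[str]
    advanced_privacy: bool = False                     # blocks exports, auto-downloads, AI use
    events: List[str] = field(default_factory=list)    # visible to everyone in the chat

    def set_advanced_privacy(self, actor: str, enabled: bool) -> None:
        if actor not in self.participants:
            raise PermissionError("only chat participants can change this setting")
        self.advanced_privacy = enabled
        # Like disappearing messages, the change is logged for all to see.
        state = "on" if enabled else "off"
        self.events.append(f"{actor} turned Advanced Chat Privacy {state}")

    def may_use_ai_features(self) -> bool:
        return not self.advanced_privacy

chat = Chat(participants=["alice", "bob"])
chat.set_advanced_privacy("alice", True)
assert not chat.may_use_ai_features()
print(chat.events)   # ['alice turned Advanced Chat Privacy on']
```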
WhatsApp is already inviting third-party audits of different components of the system and will make it part of the Meta bug bounty program to encourage the security community to submit information about flaws and potential vulnerabilities. Meta also says that, ultimately, it plans to make the components of Private Processing open source, both for expanded verification of its security and privacy guarantees and to make it easier for others to build similar services.
[2]
WhatsApp is working on private AI chats in the cloud
Wes Davis is a weekend editor who covers the latest in tech and entertainment. He has written news, reviews, and more as a tech journalist since 2020.

Meta announced a new WhatsApp feature it says is a private way to interact with Meta AI. Called "Private Processing," the feature is entirely optional, launches in the "coming weeks," and neither Meta, WhatsApp, nor third-party companies will be able to see interactions that use it, according to the release. Meta says users can "direct AI to process their requests," like for AI chat summaries, using Private Processing. If they do, the system won't "retain access to user messages once the session is complete," so a potential attacker can't access them after the fact, according to the company. Meta also wants to ensure that attackers can't target individual users without first compromising the entire system, and that independent third parties are "able to audit the behavior of Private Processing to independently verify our privacy and security guarantees." Private Processing is now part of Meta's bug bounty program, and the company promises to release a "detailed security engineering design paper" as it gets closer to launching the system.

The system Meta describes sounds similar to Apple's Private Cloud Compute (PCC). Like Apple, Meta says it will relay Private Processing requests through a third-party provider using OHTTP, a protocol that obscures users' IP addresses. But as Wired notes, one difference is that all of WhatsApp's AI requests are handled on Meta's servers, and users have to initiate Private Processing. Apple, by contrast, handles AI requests on-device by default and falls back to PCC only when a request needs to go to its servers.
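The third-party relay matters because it splits knowledge between two parties: the relay can see which network address a request came from but only ciphertext, while Meta's gateway can decrypt the request but never sees the sender's address. The following toy Python sketch illustrates that split; it is not real OHTTP (which uses HPKE public keys rather than a shared symmetric key), and every name in it is hypothetical.

```python
from dataclasses import dataclass
from cryptography.fernet import Fernet

# Toy stand-in: in real OHTTP the client encrypts to the gateway's public key
# (HPKE); here a symmetric Fernet key plays that role for simplicity.
gateway_key = Fernet.generate_key()
gateway = Fernet(gateway_key)

@dataclass
class RelayedMessage:
    client_ip: str      # visible to the relay
    ciphertext: bytes   # opaque to the relay

def relay_forward(msg: RelayedMessage) -> bytes:
    # The relay drops the sender's address before forwarding, so the gateway
    # receives ciphertext with no indication of who sent it.
    return msg.ciphertext

def gateway_handle(ciphertext: bytes) -> str:
    # The gateway can read the request but was never given the client's IP.
    return gateway.decrypt(ciphertext).decode()

request = gateway.encrypt(b"summarize my unread messages")  # client-side encryption
print(gateway_handle(relay_forward(RelayedMessage("203.0.113.7", request))))
```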
[3]
WhatsApp unveils 'Private Processing' for cloud-based AI features
WhatsApp has announced the introduction of 'Private Processing,' a new technology that enables users to utilize advanced AI features by offloading tasks to privacy-preserving cloud servers. Offloading is necessary because AI features such as message summarization and writing suggestions are too demanding for on-device hardware. The new feature will be entirely opt-in and not enabled by default, giving users complete control over how and when they choose to use it. Private Processing is not immediately available to WhatsApp users but will gradually roll out in the upcoming weeks.

For those who opt to use Private Processing, the system first performs an anonymous authentication via the user's WhatsApp client to confirm the user is legitimate. Next, the app fetches public HPKE encryption keys from a third-party CDN so that Meta cannot trace requests back to specific users, maintaining full anonymity. The user's device then initiates a connection to a Meta gateway through a third-party relay, hiding its real IP address, and establishes a remote attestation (RA) + TLS session with Meta's Trusted Execution Environment (TEE). The device then sends an end-to-end encrypted request for AI data processing using an ephemeral encryption key, which is processed inside a Confidential Virtual Machine (CVM) isolated from Meta. Meta claims the processing environment is stateless, and all messages are deleted after they're processed, leaving only "non-sensitive" logs behind. Finally, the AI-generated response is encrypted with a unique key known only to the device and the processing CVM and is sent back over the secure session for decryption on the user's device.

WhatsApp has promised to share the CVM binary and some source code to allow external validation, and a detailed white paper on the secure design of Private Processing will also be published soon.

Despite the data security and privacy assurances offered by Meta, there are always concerns when sensitive data leaves devices for processing in the cloud. Offloading AI tasks to cloud servers carries an inherent risk, even when robust end-to-end encryption is in place. Users who are uncomfortable with how Private Processing works should leave it disabled. For those who find advanced AI features useful but still want control over when data is allowed to leave their device, WhatsApp's recently launched 'Advanced Chat Privacy' feature would be the ideal solution.
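To make the ephemeral-key step above concrete, here is a rough client-side sketch in Python. It is an illustration under stated assumptions, not Meta's protocol: the "TEE" key pair is simulated in the same process, and HPKE is approximated with an X25519 exchange plus HKDF and AES-GCM from the pyca/cryptography library.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(private_key, peer_public_key) -> bytes:
    # Both sides arrive at the same session key from the X25519 shared secret.
    shared = private_key.exchange(peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"private-processing-demo").derive(shared)

# Key pair the TEE would publish (described above as fetched from a third-party CDN).
tee_private = X25519PrivateKey.generate()
tee_public = tee_private.public_key()

# 1. The device generates an ephemeral key pair used for this session only.
device_private = X25519PrivateKey.generate()
session_key = derive_key(device_private, tee_public)

# 2. The device sends its AI request end-to-end encrypted to the enclave.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"summarize this unread thread", None)

# 3. Only the enclave (holding tee_private) can recover the request.
tee_key = derive_key(tee_private, device_private.public_key())
assert AESGCM(tee_key).decrypt(nonce, ciphertext, None) == b"summarize this unread thread"

# 4. Forward security in spirit: once the session ends, the device discards the
#    ephemeral private key and the derived session key.
del device_private, session_key
```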
[4]
WhatsApp Launches Private Processing to Enable AI Features While Protecting Message Privacy
Popular messaging app WhatsApp on Tuesday unveiled a new technology called Private Processing to enable artificial intelligence (AI) capabilities in a privacy-preserving manner. "Private Processing will allow users to leverage powerful optional AI features - like summarizing unread messages or editing help - while preserving WhatsApp's core privacy promise," the Meta-owned service said in a statement shared with The Hacker News. With the introduction of the latest feature, the idea is to facilitate the use of AI features while still keeping users' messages private. It's expected to be made available in the coming weeks.

The capability, in a nutshell, allows users to initiate a request to process messages using AI within a secure environment called the confidential virtual machine (CVM) such that no other party, including Meta and WhatsApp, can access them. Confidential processing is one of the three tenets that underpin the feature.

The system is designed as follows: Private Processing obtains anonymous credentials to verify that future requests are coming from a legitimate WhatsApp client and then proceeds to establish an Oblivious HTTP (OHTTP) connection between the user's device and a Meta gateway via a third-party relay that also hides the source IP address from Meta and WhatsApp. A secure application session is subsequently established between the user's device and the Trusted Execution Environment (TEE), following which an encrypted request is made to the Private Processing system using an ephemeral key. This also means that the request cannot be decrypted by anyone other than the TEE or the user's device from which the request (e.g., message summarization) is sent. The data is processed in the CVM, and the results are sent back to the user's device in an encrypted format using a key that's accessible only on the device and the Private Processing server.

Meta has also acknowledged the weak links in the system that could expose it to potential attacks via compromised insiders, supply chain risks, and malicious end users, but emphasised that it has adopted a defense-in-depth approach to minimize the attack surface. Furthermore, the company has pledged to publish a third-party log of CVM binary digests and CVM binary images to help external researchers "analyze, replicate, and report instances where they believe logs could leak user data."

The development comes as Meta released a dedicated Meta AI app built with Llama 4 that comes with a "social" Discover feed to share and explore prompts and even remix them. Private Processing, in some ways, mirrors Apple's approach to confidential AI processing called Private Cloud Compute (PCC), which also routes requests through an OHTTP relay and processes them in a sandboxed environment. Late last year, the iPhone maker publicly made available its PCC Virtual Research Environment (VRE) to allow the research community to inspect and verify the privacy and security guarantees of the system.
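The published log of CVM binary digests is what would let outsiders check that the code running in the enclave is code Meta has actually disclosed. A minimal, hypothetical Python sketch of that client-side check follows; the digest values and function names are invented, and a real remote attestation would also verify hardware-signed evidence, which this toy check omits.

```python
import hashlib

# A third-party transparency log of approved CVM binary digests (invented values).
PUBLISHED_DIGESTS = {
    hashlib.sha256(b"cvm-build-2025.04.1").hexdigest(),
    hashlib.sha256(b"cvm-build-2025.04.2").hexdigest(),
}

def verify_attested_digest(attested_digest: str) -> bool:
    # Accept the session only if the enclave reports code that appears in the log.
    return attested_digest in PUBLISHED_DIGESTS

attested = hashlib.sha256(b"cvm-build-2025.04.2").hexdigest()
if not verify_attested_digest(attested):
    raise RuntimeError("refusing to send data: unrecognized CVM binary")
print("attested binary is present in the public log; proceeding with the request")
```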
[5]
Meta has a plan to bring AI to WhatsApp chats without breaking privacy
The company previewed new "Private Processing" technology for the messaging app.

At Meta's first-ever generative AI conference, the company is also previewing a significant update on its plans to bring AI features to WhatsApp chats. Buried in its LlamaCon updates, the company said it's working on something called "Private Processing," which will allow users to take advantage of generative AI capabilities within WhatsApp without eroding its privacy features. According to Meta, Private Processing is an "optional capability" that will enable people to "leverage AI capabilities for things like summarizing unread messages or refining them, while keeping messages private."

WhatsApp, of course, is known for its strong privacy protections and end-to-end encryption. That would seem incompatible with cloud-based AI features like Meta AI. But Private Processing will essentially allow Meta to do both. Meta has shared more details about how it will accomplish this on its engineering blog but, as Wired notes, it's a similar model to Apple's Private Cloud Compute, which allows the iPhone maker to offer Apple Intelligence features without sending all your data to the cloud.

The company seems well aware that such a plan will likely be met with skepticism. WhatsApp is regularly targeted by bad actors as it is. To address inevitable concerns from the security community, the company says it will allow security researchers and others to audit Private Processing, and will make the technology part of its bug bounty program that rewards people who find security vulnerabilities in its services. It's not clear when generative AI features may actually be available in WhatsApp chats -- the company describes its announcement today as merely a "first look" at the technology -- but it does note that Private Processing and "similar infrastructure" could have use cases beyond its messaging app.
[6]
WhatsApp borrowing Apple's Private Cloud Compute approach to AI privacy
The company has announced that it will use tech it calls Private Processing, which appears to exactly replicate Apple's Private Cloud Compute ...

Apple takes a two-stage approach to ensuring user privacy for Apple Intelligence features: requests are handled on-device wherever possible, and only sent to Private Cloud Compute (PCC) when they need more processing power. Any personal data sent to PCC uses end-to-end encryption, so that not even Apple has access to it - but the company goes further than this. It uses an approach known as 'stateless computation,' which means that once processing is complete, the personal data is completely wiped from the system. The moment processing is complete, it's as if it never existed in the first place. Additionally, Apple allows anyone to check the security of the approach for themselves, meaning security researchers will be able to verify the company's claims.

Concerns were raised when Meta added an AI chatbot to WhatsApp, with no option to remove it. Some users are seeing a new Meta AI logo in the chats screen, while others have an 'Ask Meta AI or Search' prompt in the search bar. There is currently no way to remove either. Many users are expressing their frustration at what they see as an unwanted intrusion, with Guardian columnist Polly Hudson among those to object. She likened it to the time Apple annoyed everyone by adding a U2 album to their devices.

One of the capabilities of the WhatsApp AI is to summarize messages, which obviously entails the ability to read them. The company has now laid out how it will ensure this is done in a privacy-protecting way:

"We're excited to share an initial overview of Private Processing, a new technology we've built to support people's needs and aspirations to leverage AI in a secure and privacy-preserving way. This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests -- like summarizing unread WhatsApp threads or getting writing suggestions -- in our secure and private cloud environment."

In other words, Private Processing will allow users to leverage powerful AI features while preserving WhatsApp's core privacy promise, ensuring no one except you and the people you're talking to can access or share your personal messages, not even Meta or WhatsApp.

Like PCC, Private Processing will use stateless processing. As Meta puts it: "Stateless processing and forward security: Private Processing must not retain access to user messages once the session is complete to ensure that the attacker can not gain access to historical requests or responses."

Finally, Meta will also follow Apple's lead in allowing anyone to verify its claims: "Users and security researchers must be able to audit the behavior of Private Processing to independently verify our privacy and security guarantees." You can find more details on the company's engineering blog.

While there are always grumbles when anyone copies Apple (more so than when Apple copies others), this is an area where nobody should have any complaints. Meta appears to be precisely replicating all of Apple's safeguards, and that's to be entirely commended. All tech giants should do the same.
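To illustrate what "stateless" means in practice, here is a minimal Python sketch of a request handler that keeps no per-user state: once the function returns, neither the messages nor the summary exist anywhere on the service. It is a conceptual illustration only; encryption and attestation are assumed to happen elsewhere, and every name is hypothetical.

```python
import json

SESSION_STORE = {}   # deliberately never written to: the service keeps no per-user state

def summarize(messages):
    # Stand-in for the actual model call.
    return f"{len(messages)} unread messages, mostly about dinner plans"

def process_stateless(request_bytes: bytes) -> bytes:
    messages = json.loads(request_bytes)   # plaintext exists only inside this call
    result = summarize(messages).encode()
    # Nothing is copied into SESSION_STORE, written to disk, or logged, so a
    # later compromise of the service finds no historical requests or responses.
    return result

print(process_stateless(json.dumps(["see you at 7?", "bring wine"]).encode()))
assert SESSION_STORE == {}
```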
[7]
Meta Is Building Private Processing on WhatsApp for Secure AI Use
Private Processing uses cryptographic techniques like Oblivious HTTP.

Meta is working on a new technology called Private Processing for WhatsApp that allows users to access artificial intelligence (AI) tools in a private and secure environment. On Tuesday, the Menlo Park-based tech giant shared a first look at the technology and how it is ensuring both data security and transparency at the user level. Notably, the company claims that messages shared with the AI and the responses generated within this cloud-based environment cannot be accessed by anyone (including Meta and WhatsApp) apart from the user and any other person they're talking to. The company detailed the vision behind Private Processing and the layered infrastructure that will be used to create this secure environment. It is currently under development, and the company stated that it will soon publish details about some of the components that went into building this technology to enable independent research in this area.

Private Processing is aimed at letting users interact with Meta AI and use features such as summarising unread chats and generating writing suggestions without compromising their privacy. This also addresses the tech giant's challenge of integrating AI into WhatsApp without raising concerns that Meta is storing user data on its servers or breaking the platform's end-to-end encryption.

Meta said it is building Private Processing on a trusted execution environment (TEE) infrastructure, a secure part of the cloud that processes data without revealing it to anyone else. For instance, if a user sends a request to the AI to summarise a group's messages, only the user's device and the secure processing environment will be able to access this data. Additionally, once the summary has been processed, the information is deleted from the servers.

On the technical side, Meta said Private Processing uses advanced cryptographic techniques such as Oblivious HTTP and remote attestation to ensure that the user's identity and data remain hidden. Each request is routed through third-party relays and verified against public ledgers to ensure only approved code is used. This confidential processing also means that neither Meta nor WhatsApp can access the data entering the cloud environment, either in transit to Private Processing or while the data is being processed, according to the company. Meta has also added enforceable guarantees so that any attempt to modify the system will automatically trigger a system failure. The company also plans to let users and security researchers audit the secure environment and verify the guarantees made.

But these protections only ensure security from potential internal issues. Meta says it is also building safety layers for external threats. As per the post, cyberattackers will not be able to target individual users without compromising the entire system. Further, since data is only stored temporarily on the server, attackers will not be able to retrieve older data even if they pull off a server-wide attack.

In the coming weeks, Meta said it will release more details, including technical papers and bug bounty expansions. While AI-powered features like message summarisation will be part of Private Processing's initial offering, the company plans to add several other use cases in the future. For more information, readers can find the entire blog post on Meta's engineering blog.
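The "enforceable guarantees" idea, that tampering triggers failure rather than silent degradation, can be pictured with a small Python sketch. Real TEEs enforce this in hardware through measured boot and attestation; this toy version just hashes a code string at startup and refuses to serve requests if the measurement ever changes. All names and values here are invented for illustration.

```python
import hashlib

# Code the operator has approved and published a measurement of (hypothetical).
APPROVED_CODE = b"def handle(request): return summarize(request)"
BASELINE = hashlib.sha256(APPROVED_CODE).hexdigest()

def handle_request(request: str, current_code: bytes) -> str:
    # Re-measure before every request; any drift from the baseline halts the
    # service and raises an alert instead of silently running altered code.
    if hashlib.sha256(current_code).hexdigest() != BASELINE:
        raise SystemExit("integrity check failed: halting and alerting")
    return f"summary of: {request}"

print(handle_request("unread group chat", APPROVED_CODE))     # served normally
# handle_request("unread group chat", b"tampered build")      # would halt instead
```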
[8]
WhatsApp Private Processing: Meta's AI Push Could Undermine User Privacy
WhatsApp is arguably one of the most secure messaging apps, second only to Signal, offering end-to-end encryption by default. This means that no one except the sender and receiver, not even Meta, can read your private messages. With over 3 billion users, WhatsApp's security and privacy guarantees largely stem from the strong implementation of end-to-end (E2E) encryption. However, Meta is developing "Private Processing" to enable AI features on WhatsApp, which could undermine its widely trusted E2E encryption. Until now, WhatsApp has largely remained secure, and Meta has even taken spyware companies like NSO to court to protect user privacy. But this time, Meta seems to be diluting WhatsApp's integrity through its own actions, only to bring AI features to WhatsApp.

Meta has shared an early look at Private Processing, a new technology designed to bring AI features to WhatsApp while claiming to maintain core privacy. It will allow people to use AI features on WhatsApp like summarizing unread messages or generating writing suggestions. However, in order to process such AI requests, the message content will be sent to Meta's cloud servers, which raises concerns about WhatsApp's end-to-end encryption.

To address privacy concerns, Meta says the communication between the user's device and the Private Processing infrastructure is also end-to-end encrypted. In addition, Meta claims that no one, including Meta itself and WhatsApp, can decrypt these messages, as they are protected using an "ephemeral key." This key is known only to the user's device and the selected TEE (Trusted Execution Environment) on the cloud server. That said, by introducing Private Processing on WhatsApp, Meta is effectively creating a third layer of access -- beyond the sender and receiver -- into the messaging chain. In a way, this breaks the traditional end-to-end encryption model of WhatsApp, allowing another intermediary to access personal messages, on request.

Meta says the Trusted Execution Environment (TEE) is built on top of "confidential computing infrastructure" to uphold user privacy and security. Now, let's understand how the TEE is designed for WhatsApp Private Processing and whether it inspires confidence.

The Trusted Execution Environment (TEE) on Meta's cloud servers is designed to process WhatsApp AI requests privately. It's a highly private environment where the system is completely isolated from all parties. Its communication with the user's device is end-to-end encrypted, and no one, including Meta, WhatsApp, or any third-party actor, can access the data -- not in transit or at rest. Meta has also disabled remote access to TEEs, isolating them further. In addition, the server hardware uses CPU technologies like "confidential virtualization" and secure GPU modes to prevent attacks on the hardware. TEEs also don't save messages after processing them: it's a "stateless" system where all messages are discarded after data processing. It also means that your messages won't be used by Meta for training AI models.

Furthermore, Meta says Private Processing uses anonymous credentials to authenticate users without being able to identify them; no identifiable information is passed to the system, minimizing data leaks. Finally, there are verification systems in place to perform hardware-based attestations and only load a trusted list of software. And to improve transparency, Meta plans to publish the system code so that researchers can verify and audit the code.
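The anonymous-credential step can be pictured with a deliberately simplified Python sketch: the client redeems a single-use token that proves "a legitimate WhatsApp client is asking" without carrying any account identifier. Note that this toy version is not unlinkable, since the issuer could log which account received which token; real deployments rely on blind-signature style schemes (in the spirit of Privacy Pass) precisely so the issuer cannot make that connection. Every name here is hypothetical.

```python
import secrets

issued_tokens = set()   # held by the credential issuer

def issue_token() -> str:
    # Issued after the client proves it is a genuine WhatsApp install; the token
    # itself contains no phone number, account ID, or device identifier.
    token = secrets.token_hex(16)
    issued_tokens.add(token)
    return token

def redeem(token: str) -> bool:
    # Redemption proves only that "some legitimate client" is asking.
    if token in issued_tokens:
        issued_tokens.discard(token)   # single use, so it cannot be replayed
        return True
    return False

t = issue_token()
assert redeem(t)        # accepted once
assert not redeem(t)    # replay is rejected
```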
Not only that, but Meta is expanding its bug bounty program for independent security testing. WhatsApp's Private Processing sounds pretty similar to Apple's Private Cloud Compute. Apple has also developed its own cloud infrastructure to privately process Apple Intelligence features. While Apple has built a strong reputation for protecting user privacy at all costs, the same can't be said for Meta, given its past track record. If you are skeptical of Meta's Private Processing promise for AI features, you can enable "Advanced Chat Privacy" in WhatsApp. This option allows you to "block others from exporting chats, auto-downloading media to their phone, and using messages for AI features." You can enable Advanced Chat Privacy for individual chats and group chats as well. Basically, other users won't be able to use your chats for AI features. While tech companies are racing to integrate AI features into every product, I think messaging apps should remain isolated from cloud-based AI processing. In security, data minimization is one of the fundamental principles, and WhatsApp has followed this core principle for many years, especially by implementing end-to-end encryption. Meta should be cautious in adopting Private Processing in WhatsApp, which adds some AI features but also introduces a new attack vector.
[9]
Meta unveils Private Processing for secure AI tools on WhatsApp
Meta on Tuesday introduced Private Processing, a new optional feature designed to let WhatsApp users process messages using AI in a secure, private cloud environment. The company said this ensures that neither Meta, WhatsApp, nor any third party can access the messages, maintaining end-to-end encryption.

The announcement highlighted how AI has transformed technology interactions by automating tasks and analyzing data. However, traditional AI processing, which relies on server-based large language models, often requires providers to see user requests. This can challenge privacy, especially for sensitive messages. Meta stated that Private Processing tackles this issue by supporting AI functions, such as summarizing messages or offering writing assistance, while upholding WhatsApp's commitment to privacy.

Meta outlined the guiding principles behind Private Processing, which operates within a Trusted Execution Environment (TEE), a secure cloud setup that prevents unauthorized access to data. The company emphasized that Private Processing meets strict confidentiality requirements and said it developed a threat model to identify risks, including compromised insiders, supply chain attacks, and malicious end users, and implemented defense-in-depth measures to counter them.

Meta plans to share Private Processing components publicly, including a security design white paper, and expand its Bug Bounty program to cover this feature. The company will release CVM binary images and source code for attestation verification to support independent research. An in-app log will show users their Private Processing requests and session details.

Meta expects Private Processing to launch in the coming weeks, initially supporting message summarization and writing suggestions. The company believes this infrastructure could support other AI use cases in the future. Meta welcomes feedback through its Bug Bounty program and will continue sharing updates transparently.
WhatsApp introduces 'Private Processing', a new technology designed to enable AI features in end-to-end encrypted chats without compromising user privacy. The system aims to process data securely in the cloud while maintaining WhatsApp's core privacy promises.
WhatsApp, the end-to-end encrypted messaging app used by approximately 3 billion people worldwide, is set to introduce a groundbreaking feature called 'Private Processing' in the coming weeks [1]. This new technology aims to integrate cloud-based AI capabilities while maintaining the app's stringent privacy and security standards [2].
Private Processing utilizes a carefully architected platform designed to process data for AI tasks without allowing access to Meta, WhatsApp, or any third party [1]. The system employs several key components: a Trusted Execution Environment (TEE) hosting Confidential Virtual Machines (CVMs), anonymous credentials for authentication, Oblivious HTTP (OHTTP) relays operated by third parties, and ephemeral encryption keys.
The process involves anonymous authentication, fetching public encryption keys, establishing secure connections, and processing requests in the CVM before sending encrypted responses back to the user's device [3][4].
WhatsApp has implemented several measures to ensure user privacy and security: the AI features are strictly opt-in, processing is stateless so messages are deleted once a session ends, requests are routed through relays that hide users' IP addresses, and any tampering with the system is designed to trigger failure and alerts rather than silent compromise.
Despite the robust security measures, some experts have expressed concerns: researchers note that the move toward cloud-based AI features could put WhatsApp on a slippery slope, and that offloading data from devices to servers always carries some inherent risk, however carefully the system is engineered.
To address potential concerns, Meta has committed to inviting third-party audits, adding Private Processing to its bug bounty program, publishing a detailed security engineering design paper, and releasing CVM binary images and portions of the source code so researchers can verify its claims.
WhatsApp's Private Processing bears similarities to Apple's Private Cloud Compute (PCC) system, which also uses OHTTP relays and sandboxed environments for confidential AI processing [4]. However, WhatsApp's approach differs in that all AI requests are handled on Meta's servers, whereas Apple defaults to on-device AI processing when possible [2].
The introduction of Private Processing could have far-reaching implications for the integration of AI in secure messaging platforms. As Meta continues to expand its AI capabilities across its services, this technology may serve as a blueprint for maintaining user privacy while leveraging the power of cloud-based AI [5].