3 Sources
[1]
Signal creator Moxie Marlinspike wants to do for AI what he did for messaging
Moxie Marlinspike -- the pseudonym of an engineer who set a new standard for private messaging with the creation of Signal Messenger -- is now aiming to revolutionize AI chatbots in a similar way. His latest brainchild is Confer, an open source AI assistant that provides strong assurances that user data is unreadable to the platform operator, hackers, law enforcement, or any party other than account holders. The service -- including its large language models and back-end components -- runs entirely on open source software that users can cryptographically verify is in place. Data and conversations originating from users, and the resulting responses from the LLMs, are encrypted in a trusted execution environment (TEE) that prevents even server administrators from peeking at or tampering with them. Conversations are stored by Confer in the same encrypted form, using a key that remains securely on users' devices.

Like Signal, the under-the-hood workings of Confer are elegant in their design and simplicity. Signal was the first end-user privacy tool that made using it a snap. Prior to that, using PGP email or other options to establish encrypted channels between two users was a cumbersome process that was easy to botch. Signal broke that mold. Key management was no longer a task users had to worry about. Signal was designed to prevent even the platform operators from peering into messages or identifying users' real-world identities.

"Inherent data collectors"

All major platforms are required to turn over user data to law enforcement or private parties in a lawsuit when either provides a valid subpoena. Even when users opt out of having their data stored long term, parties to a lawsuit can compel the platform to store it, as the world learned last May when a court ordered OpenAI to preserve all ChatGPT users' logs -- including deleted chats and sensitive chats logged through its API business offering. Sam Altman, CEO of OpenAI, has said such rulings mean even psychotherapy sessions on the platform may not stay private. Another carve-out to opting out: AI platforms like Google Gemini may have humans read chats.

Data privacy expert Em (she keeps her last name off the Internet) called AI assistants the "archnemesis" of data privacy because their utility relies on assembling massive amounts of data from myriad sources, including individuals. "AI models are inherent data collectors," she told Ars. "They rely on large data collection for training, improvements, operations, and customizations. More often than not, this data is collected without clear and informed consent (from unknowing training subjects or from platform users), and is sent to and accessed by a private company with many incentives to share and monetize this data."

The lack of user control is especially problematic given the nature of LLM interactions, Marlinspike says. Users often treat the dialogue as an intimate conversation, sharing their thoughts, fears, transgressions, business dealings, and deepest, darkest secrets as if AI assistants were trusted confidants or personal journals. The interactions are fundamentally different from traditional web search queries, which usually adhere to a transactional model of keywords in and links out. He likens AI use to confessing into a "data lake."

Awaking from the nightmare that is today's AI landscape

In response, Marlinspike has developed and is now trialing Confer.
In much the way Signal uses encryption to make messages readable only to parties participating in a conversation, Confer protects user prompts, AI responses, and all data included in them. And just like Signal, there's no way to tie individual users to their real-world identity through their email address, IP address, or other details.

"The character of the interaction is fundamentally different because it's a private interaction," Marlinspike told Ars. "It's been really interesting and encouraging and amazing to hear stories from people who have used Confer and had life-changing conversations. In part because they haven't felt free to include information in those conversations with sources like ChatGPT or they had insights using data that they weren't really free to share with ChatGPT before but can using an environment like Confer."

One of the main ingredients of Confer encryption is passkeys. The industry-wide standard generates an encryption keypair that's unique to each service a user logs into. The public key is sent to the server. The private key is stored only on the user device, inside protected storage hardware that hackers (even those with physical access) can't reach. Passkeys provide two-factor authentication and can be configured to log in to an account with a fingerprint, face scan (both of which also stay securely on a device), or a device unlock PIN or passcode.

The private key allows the device to log in to Confer and encrypt all input and output with encryption that's widely believed to be impossible to break. That allows users to store conversations on Confer servers with confidence that they can't be read by anyone other than themselves. The storage allows conversations to sync across other devices the user owns. The code making this all work is available for anyone to inspect. It looks like this:

// Request a passkey assertion and derive 32 bytes of key material
// via the WebAuthn PRF extension.
const assertion = await navigator.credentials.get({
  mediation: "optional",
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    allowCredentials: [{ id: credId, type: "public-key" }],
    userVerification: "required",
    extensions: { prf: { eval: { first: new Uint8Array(salt) } } },
  },
}) as PublicKeyCredential;

const { prf } = assertion.getClientExtensionResults();
const rawKey = new Uint8Array(prf.results.first);

This robust internal engine is fronted by a user interface (shown in the two images above) that's deceptively simple. In just two strokes, a user is logged in, and all previous chats are decrypted. These chats are then available to any device logged into the same account. This way, Confer can sync chats without compromising privacy. The ample 32 bytes of key material allow keys to be rotated regularly, which provides forward secrecy, meaning that in the event a key is compromised, an attacker cannot read previous or future chats.

The other main Confer ingredient is a TEE on the platform servers. TEEs encrypt all data and code flowing through the server CPU, protecting them from being read or modified by someone with administrative access to the machine. The Confer TEE also provides remote attestation: a cryptographically signed statement from the server verifying that data and software are running inside the TEE, along with a list of all software running on it. On Confer, remote attestation allows anyone to reproduce the bit-for-bit outputs that confirm that the publicly available proxy and image software -- and only that software -- is running on the server.
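To make the snippet above concrete, here is a minimal sketch, in the same WebCrypto style, of how 32 bytes of PRF output like rawKey could be stretched into a symmetric key that seals chats before they are uploaded. The sealConversation function, the salt, and the "chat-storage-v1" label are hypothetical illustrations, not Confer's published key schedule:

// Minimal sketch, not Confer's actual code: derive an AES-GCM key from
// the PRF output and seal a conversation for server-side storage.
async function sealConversation(rawKey: Uint8Array, conversation: object) {
  // Import the PRF output as HKDF input keying material.
  const ikm = await crypto.subtle.importKey("raw", rawKey, "HKDF", false, ["deriveKey"]);
  // Derive a dedicated storage key; salt and info label are illustrative.
  const storageKey = await crypto.subtle.deriveKey(
    {
      name: "HKDF",
      hash: "SHA-256",
      salt: new Uint8Array(32), // would be a per-account salt in practice
      info: new TextEncoder().encode("chat-storage-v1"), // hypothetical label
    },
    ikm,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );
  // AES-GCM requires a fresh 12-byte nonce for every message.
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const plaintext = new TextEncoder().encode(JSON.stringify(conversation));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, storageKey, plaintext);
  // Only { iv, ciphertext } leaves the device; storageKey never does.
  return { iv, ciphertext: new Uint8Array(ciphertext) };
}

Because the key is derived rather than stored, a server breach that leaks the ciphertext reveals nothing without the passkey that produced rawKey in the first place.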
To further verify Confer is running as promised, each release is digitally signed and published in a transparency log. Native support for Confer is available in the most recent versions of macOS, iOS, and Android. On Windows, users must install a third-party authenticator. Linux support also doesn't exist, although this extension bridges that gap.

There are other private LLMs, but none from the big players

Another publicly available LLM offering E2EE is Lumo, provided by Proton, the European company behind the popular encrypted email service. It adopts the same encryption engine used by Proton Mail, Drive, and Calendar. The internals of the engine are considerably more complicated than Confer's because they rely on a series of both symmetric and asymmetric keys. The end result for the user is largely the same, however. Once a user authenticates to their account, Proton says, all conversations, data, and metadata are encrypted with a symmetric key that only the user has. Users can opt to store the encrypted data on Proton servers for device syncing or have it wiped immediately after the conversation is finished.

A third LLM provider promising privacy is Venice. It stores all data locally, meaning on the user device. No data is stored on the remote server.

Most of the big LLM platforms offer a means for users to exempt their conversations and data from use for marketing and training purposes. But as noted earlier, these promises often come with major carve-outs. Besides selected review by humans, personal data may still be used to enforce terms of service or for other internal purposes, even when users have opted out of default storage. Given today's legal landscape -- which allows most data stored online to be obtained with a subpoena -- and the regular occurrence of blockbuster data breaches, there can be no reasonable expectation that personal data stored online remains private.

It would be great if big providers offered end-to-end encryption protections, but there's currently no indication they plan to do so. Until then, there are a handful of smaller alternatives that will keep user data out of the ever-growing data lake.
[2]
Signal's Founder Turns His Attention to AI's Privacy Problem
Confer, an open source chatbot, encrypts both prompts and responses so companies and advertisers can't access user data.

The founder of Signal has been quietly working on a fully end-to-end encrypted, open-source AI chatbot designed to keep users' conversations secret. In a series of blog posts, Moxie Marlinspike makes clear that while he is a fan of large language models, he's uneasy about how little privacy most AI platforms currently provide.

Marlinspike argues that, like Signal, a chatbot's interface should accurately reflect what's happening under the hood. Signal looks like a private one-on-one conversation because it is one. Meanwhile, chatbots like ChatGPT and Claude feel like a safe space for intimate exchanges or a private journal, even though users' conversations can be accessed by the company behind them and sometimes used for training. In other words, if a chatbot feels like you're having a private conversation, Marlinspike says it should actually work that way too.

He says this is especially important because LLMs represent the first major tech medium that "actively invites confession." As people chat with these systems, they end up sharing a lot about how their brain works, including thinking patterns and uncertainties. Marlinspike warns that this kind of info could easily be turned against users, with advertisers eventually exploiting insights about them to sell products or influence behavior.

His proposed solution is Confer, an AI chatbot that encrypts both prompts and responses so that only the user can access them. "Confer is designed to be a service where you can explore ideas without your own thoughts potentially conspiring against you someday; a service that breaks the feedback loop of your thoughts becoming targeted ads becoming thoughts; a service where you can learn about the world -- without data brokers and future training runs learning about you instead," wrote Marlinspike.

Signal was founded in 2014 around similar principles, and its open-source encrypted messaging protocol was eventually adopted by Meta's WhatsApp just a few years later. So it's possible Meta and other tech giants could eventually adopt Confer's technology as well.

According to Marlinspike, Confer is designed so that users' conversations are encrypted before they ever leave their devices, similar to how Signal works. Prompts are encrypted on a user's computer or phone and sent to Confer's servers in that form, then decrypted only in a secure data environment to generate a response.

Confer does this by using a mix of security tools. Instead of traditional passwords, it uses passkeys, such as Face ID, Touch ID, or a device unlock PIN on verified users' devices, to derive encryption keys. When it comes time for the AI to respond, Confer uses what it calls confidential computing, where hardware-enforced isolation is used to run code in a Trusted Execution Environment (TEE). "The host machine provides CPU, memory, and power, but cannot access the TEE's memory or execution state," Marlinspike explained. With the LLM's "thinking," or inference, running in a confidential virtual machine, the response is then encrypted and sent back to the user. The hardware also produces cryptographic proof, known as attestation, that allows a user's device to verify that everything is running as it should.
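As a rough sketch of the flow described above, the code below encrypts a prompt so that only software holding the TEE's private key can read it. The encryptPromptForTee function, the teePublicKey variable, and the choice of ECDH P-256 with AES-GCM are assumptions for illustration; Confer's actual transport protocol isn't specified in these posts:

// Hypothetical sketch: encrypt a prompt to an attested TEE key.
// Assumes teePublicKey is an ECDH P-256 CryptoKey whose provenance was
// already checked against the TEE's attestation document.
async function encryptPromptForTee(prompt: string, teePublicKey: CryptoKey) {
  // Ephemeral keypair for this one request.
  const eph = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" }, false, ["deriveKey"]);
  // The ECDH shared secret with the TEE becomes a one-time AES-GCM key.
  const sessionKey = await crypto.subtle.deriveKey(
    { name: "ECDH", public: teePublicKey },
    eph.privateKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, sessionKey, new TextEncoder().encode(prompt));
  // The host relays this blob but cannot derive sessionKey; only code
  // inside the TEE holds the matching private key.
  const ephPub = await crypto.subtle.exportKey("raw", eph.publicKey);
  return {
    ephemeralPublicKey: new Uint8Array(ephPub),
    iv,
    ciphertext: new Uint8Array(ciphertext),
  };
}

The responding TEE would perform the same derivation with its private key and the client's ephemeral public key, decrypt the prompt, run inference, and encrypt the reply the same way in reverse.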
[3]
Signal's founder is taking on ChatGPT -- here's why the 'truly private AI' can't leak your chats
Unlike ChatGPT or Gemini, Confer doesn't collect or store your data for training, logging, or legal access.

The man who made private messaging mainstream now wants to do the same for AI. Signal creator Moxie Marlinspike has launched a new AI assistant called Confer, built around similar privacy principles. Conversations with Confer can't be read even by server administrators. The platform encrypts every part of the user interaction by default and runs in what's called a trusted execution environment, never letting sensitive user data leave that encrypted bubble. No saved data is reviewed, used for training, or sold to other companies.

Confer is an outlier in this way, since user data is usually what pays for a free AI chatbot. But with consumer trust in AI privacy already strained, the appeal is obvious. People are noticing that what they say to these systems doesn't always stay private. A court order last year forced OpenAI to retain all ChatGPT user logs, even deleted ones, for potential legal discovery, and ChatGPT chats even showed up in Google Search results for a while, thanks to accidentally public links. There was also an uproar over contractors reviewing anonymized chatbot transcripts that included personal health information.

Confer's data is encrypted before it even reaches the server, using passkeys stored only on the user's device. Those keys are never uploaded or shared. Confer supports syncing chats between devices, yet thanks to cryptographic design choices, not even Confer's creators can unlock them. It's ChatGPT with Signal security.

Confer's design goes one step further than most privacy-first products by offering a feature called remote attestation. This allows any user to verify exactly what code is running on Confer's servers. The platform publishes the software stack in full and digitally signs every release. This may not matter to every user. But for developers, organizations, and watchdogs trying to assess how their data is handled, it's a radical level of security that might allow some concerned AI chatbot fans to breathe easier.

Not that there aren't privacy settings on other AI chatbots. There are actually quite a few that users can review, even if they don't think to do so until after they've already said something personal. ChatGPT, Gemini, and Meta AI all provide opt-out toggles for things like chat history, allowing data to be used for training, or outright removing data. But the default state is surveillance, and opting out is the user's responsibility. Confer inverts that by making the most private configuration the default. That privacy is baked in, which also highlights how reactive most privacy tools are. Confer might at least raise awareness, if not consumer demand, for more AI chatbots that forget. Organizations like schools and hospitals interested in AI might be enticed by tools that guarantee confidentiality by design.
Moxie Marlinspike, creator of Signal Messenger, has unveiled Confer, an open-source AI assistant that encrypts all user interactions by default. Unlike ChatGPT or Gemini, Confer ensures conversations remain unreadable to platform operators, hackers, and law enforcement. The platform uses trusted execution environments and passkeys to protect user data from collection, training, or legal access.
Moxie Marlinspike, the engineer who set a new standard for private messaging with Signal Messenger, is now turning his attention to AI privacy with the launch of Confer, an open-source AI assistant designed to protect user conversations from prying eyes [1]. The Signal founder aims to address growing data privacy concerns surrounding large language models (LLMs), which privacy experts describe as "inherent data collectors" that accumulate massive amounts of personal information without clear consent [1].
Unlike ChatGPT, Gemini, or other mainstream AI platforms, Confer encrypts both prompts and responses so that only users can access their conversations [2]. The private AI chatbot operates on the principle that if an interface feels like a private conversation, it should function that way under the hood. Moxie Marlinspike argues that LLMs represent the first major tech medium that "actively invites confession," with users sharing thinking patterns, fears, business dealings, and deepest secrets as if chatting with trusted confidants [2].
Confer's architecture ensures that conversations are encrypted before they ever leave users' devices, similar to how Signal works [2]. The platform relies on passkeys -- an industry-wide standard that generates a unique encryption keypair for each service. Public keys are sent to servers while private keys remain securely stored on user devices, inside protected storage hardware that even hackers with physical access cannot breach [1]. Users can authenticate through Face ID, Touch ID, or device unlock PINs.

The truly private AI leverages Trusted Execution Environments (TEEs) through confidential computing, where hardware-enforced isolation runs code in secure enclaves [2]. The host machine provides CPU, memory, and power, but cannot access the TEE's memory or execution state. Data and conversations originating from users and the resulting LLM responses are encrypted within this environment, preventing even server administrators from viewing or tampering with them [1]. Conversations stored by Confer remain in encrypted form, using keys that never leave user devices.

Confer goes beyond typical privacy tools by offering remote attestation, allowing any user to verify exactly what code is running on the platform's servers [3]. The entire software stack -- including large language models and back-end components -- runs on open source software that users can cryptographically verify is in place. The platform publishes every release with digital signatures, providing developers, organizations, and watchdogs radical transparency to assess how their data is handled [3].
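As a loose illustration of what verifying a signed release against a transparency log can involve, the sketch below recomputes a release's digest, compares it with the logged value, and checks the publisher's signature. The verifyRelease function, the digest format, and the ECDSA key are assumptions for illustration, not Confer's documented tooling:

// Hypothetical sketch: validate a downloaded release against a
// transparency-log entry and the publisher's signing key.
async function verifyRelease(
  releaseBytes: Uint8Array,
  loggedDigestHex: string,
  signature: Uint8Array,
  publisherKey: CryptoKey, // assumed ECDSA P-256 verification key
): Promise<boolean> {
  // Recompute the digest of the artifact that was actually downloaded.
  const digest = new Uint8Array(await crypto.subtle.digest("SHA-256", releaseBytes));
  const hex = Array.from(digest, (b) => b.toString(16).padStart(2, "0")).join("");
  // The artifact must match what the transparency log recorded...
  if (hex !== loggedDigestHex) return false;
  // ...and the entry must carry a valid signature from the publisher.
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    publisherKey,
    signature,
    digest,
  );
}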
The launch of Confer addresses escalating concerns about how AI platforms handle sensitive information. All major platforms must turn over user data to law enforcement or private parties when presented with valid subpoenas [1]. Last May, a court ordered OpenAI to preserve all ChatGPT user logs -- including deleted chats and sensitive conversations logged through its API business offering. OpenAI CEO Sam Altman acknowledged such rulings mean even psychotherapy sessions on the platform may not stay private [1]. Additionally, ChatGPT chats appeared in Google Search results for a period due to accidentally public links, and platforms like Google Gemini may have humans read chats even when users opt out [1].

Moxie Marlinspike warns that information shared with AI assistants could be weaponized, with advertisers exploiting insights about thinking patterns to sell products or influence behavior [2]. He describes AI use without privacy protections as confessing into a "data lake," where personal revelations become fodder for data collection, training runs, and monetization [1]. Data privacy expert Em notes that AI models rely on large data collection for training, improvements, operations, and customizations, often without clear and informed consent [1].
Confer inverts the standard AI privacy model by making the most private setup the default rather than requiring users to opt out [3]. While ChatGPT, Gemini, and Meta AI provide toggles for chat history and training data usage, the default state remains surveillance-oriented, placing responsibility on users to protect themselves [3]. Confer's approach ensures there's no saved data for training, logging, or legal access -- a stark departure from platforms where data collection is considered the value proposition for free AI chatbots [3].

The platform's design enables users to sync chats between devices while maintaining encryption that not even Confer's creators can unlock [3]. Marlinspike reports that early users have had "life-changing conversations" specifically because they felt free to include information they wouldn't share with ChatGPT or use data they previously couldn't share with other AI platforms [1].

Signal was founded in 2014 around similar privacy principles, and its open-source encrypted messaging protocol was eventually adopted by Meta's WhatsApp just a few years later [2]. This precedent suggests tech giants could potentially adopt Confer's technology as consumer demand for AI privacy grows. Organizations like schools and hospitals interested in AI might be particularly drawn to tools that guarantee confidentiality by design [3]. As consumer trust in AI privacy remains strained, Confer's launch may raise awareness and shift expectations about what privacy protections should be standard in AI interactions.