Large Language Models (LLMs) are increasingly treated as informal advisers, sounding boards, and sources of emotional reassurance. That trend has prompted serious concern, particularly as general-purpose AI is not designed to offer psychologically sensitive guidance - and has already been implicated in cases of real harm. The AI coaching app Purpose arrives amid uncertainty about how AI should behave with vulnerable users. The company behind it is clear about what the tool is intended to do, and equally clear about what it is not built for.
Purpose's launch materials describe the app as "the world's first AI mentor" built specifically for personal growth, with a multi-model routing system optimized for safety, a long-term memory architecture, and a challenge-based conversational approach. It enters a market where expectations and norms have not yet solidified, and where public scrutiny of AI-driven advice is intensifying.
Raj Singh, who previously founded the AI concierge platform Go Moment (acquired by Booking.com), explains that he came to self-development "as a skeptic early on" before his interest expanded into neuroscience, behavioral economics, and personal growth literature over the past two decades.
After selling his last company, he spent a year traveling across 24 countries experimenting with different coaching and reflective practices - a period he describes as pivotal in shaping the idea for Purpose. The experience strengthened his belief that personalized guidance should be easier to access.
The collaboration with Mark Manson, author of the bestselling The Subtle Art of Not Giving a F*ck, emerged unexpectedly. Singh had already begun prototyping AI systems that explored emotional responsiveness rather than task efficiency. He explains:
When I started to prototype things that leveraged the emotional intelligence of these LLMs, I found the results to be startlingly good.
Manson, meanwhile, had been evaluating investment opportunities in AI products for mental well-being. A mutual friend introduced them, and Singh notes that a planned one-hour lunch evolved into several hours of discussion that aligned their thinking.
In its public messaging, Purpose leans on themes familiar from Manson's books - particularly the idea that personal growth often requires acknowledging discomfort rather than avoiding it. The press release describes a platform designed to provide "directional feedback" and "gently point out blind spots," with guidance drawn from established psychological frameworks rather than motivational scripts or generic reassurance.
Singh explains that Purpose is designed to encourage reflection before action. In his own use of the system, he says, the responses often prompted him to sit with an emotion or examine its source before making a decision. He characterizes the tool as aiming to serve as "a wise conversation partner," although he is explicit that this is coaching rather than any clinical relationship. The distinction matters in a market where users increasingly ask AI for therapeutic guidance that such systems are neither designed nor authorized to provide.
How Purpose is built matters - it's the basis for the company's claims about what the tool can do and where it stops. Singh describes a "proprietary blend" of major US foundation models - including Claude, GPT, and Gemini - with each routed to specific tasks based on comparative strengths the team observed during development. One model handles conversational flow; another evaluates quantitative signals; a third generates summaries that populate a running "journey" log within the app.
The team argues that this architecture separates Purpose from typical chatbot behavior. If a user expresses stress about a work presentation, for example, the conversational model responds. If the exchange reveals a recurring pattern, Singh explains, a separate model evaluates that signal. All of this sits on top of a custom memory layer that stores durable personal details without re-inserting full conversational transcripts into every prompt.
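To make the routing idea concrete, the sketch below shows what task-based dispatch across several models might look like in outline. It is a minimal illustration under stated assumptions: the model labels, the keyword heuristic, and the respond() stub are invented for this example and are not Purpose's actual code.

```python
# Minimal sketch of task-based routing across multiple models, as Singh
# describes it in outline. Model labels, the keyword heuristic, and the
# respond() stub are illustrative assumptions, not Purpose's implementation.

ROUTES = {
    "conversation": "conversational-model",    # handles the empathetic reply
    "pattern_analysis": "analytical-model",    # evaluates recurring signals
    "journey_summary": "summarization-model",  # writes the running journey log
}

def classify(message: str, history: list[str]) -> str:
    """Crude stand-in for whatever classifier picks which model runs."""
    words = set(message.lower().split())
    overlaps = sum(1 for past in history if words & set(past.lower().split()))
    return "pattern_analysis" if overlaps >= 2 else "conversation"

def respond(model: str, message: str, memory: dict[str, str]) -> str:
    """Placeholder for the model call; memory is injected as compact facts
    rather than the full conversational transcript."""
    context = "; ".join(f"{k}: {v}" for k, v in memory.items())
    return f"[{model}] (context: {context}) reply to: {message}"

def handle(message: str, history: list[str], memory: dict[str, str]) -> str:
    return respond(ROUTES[classify(message, history)], message, memory)

memory = {"colleague": "Dana", "long-term goal": "career change"}
history = ["I'm stressed about presenting to Dana", "Presentations stress me out"]
print(handle("I'm stressed about a work presentation", history, memory))
```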
Singh describes the memory architecture as an attempt to avoid the repetitive introductions common in standard chatbots. Users only need to describe stable facts once - such as the names of colleagues or long-term personal challenges - and the system retains them over time. "The more often a user mentions something, the more weight it will carry," he notes. Singh acknowledges that summarization is intrinsically reductive but argues that repeated themes become reinforced as the system learns what matters to an individual user.
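The frequency-weighting Singh mentions can be pictured with a small sketch like the one below. The scoring scheme, the budget cutoff, and the fact format are assumptions made for illustration; the point is only that repeated mentions rank higher when context is assembled.

```python
# Rough illustration of frequency-weighted memory: facts mentioned more
# often carry more weight when context is assembled for a prompt. The
# scoring scheme and the 'budget' cutoff are illustrative assumptions.
from collections import defaultdict

class WeightedMemory:
    def __init__(self, budget: int = 3):
        self.counts = defaultdict(int)
        self.budget = budget  # how many facts get injected per prompt

    def observe(self, fact: str) -> None:
        """Record one mention of a stable personal detail."""
        self.counts[fact] += 1

    def context(self) -> list[str]:
        """Return the most reinforced facts, not a full transcript."""
        ranked = sorted(self.counts.items(), key=lambda kv: kv[1], reverse=True)
        return [fact for fact, _ in ranked[: self.budget]]

memory = WeightedMemory(budget=2)
for fact in ["manager: Dana", "goal: run a marathon", "manager: Dana",
             "stressor: public speaking", "manager: Dana"]:
    memory.observe(fact)
print(memory.context())  # 'manager: Dana' ranks first after repeated mentions
```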
Early feedback suggests users notice the difference. Singh comments:
We've had multiple users tell us something to the effect of: 'this thing is like a hard drive for my brain'.
He contrasts this with the limits of human support:
When you're talking to a different person, that person has their own life. They maybe sat in traffic for 45 minutes this morning. They've got kids to pick up from school. Their bandwidth is not 100% devoted to you.
A recurring tension for the team, Singh admits, is deciding how far the system should lean into warmth versus restraint. The company wants the tool to feel approachable enough that people actually use it, but not so affirming or casual that it resembles a friend rather than a coach. That balance is difficult: the product must avoid becoming sterile on the one hand and emotionally overconfident on the other.
Singh frames this as a question of delivery:
There's the medicine part of the conversation - the whole expression of 'the spoonful of sugar helps the medicine go down'.
Purpose's team includes individuals with experience in both AI development and wellness product design, including prior roles at Headspace, the meditation and mental health app. Singh argues that this cross-disciplinary background has shaped decisions about how the AI should "push" users without drifting into therapeutic territory. The design aim, as he describes it, is a conversation partner that prompts structured reflection while remaining within the boundaries of coaching.
Purpose presents itself as focused on safety and clear boundaries, and Singh emphasizes that the company is not trying to maximize engagement. He and Manson chose not to raise early venture capital, he explains, because they did not want revenue expectations driving the product toward time-on-device metrics. The stated goal is for users to gain clarity, take action in their offline lives, and ultimately rely on the app less over time.
When we spoke about safety features, Singh described a system designed to detect when users begin asking questions that resemble therapy or express distress that may require professional intervention. In those cases, the app responds with a message indicating that it cannot assist and provides contextualized crisis hotline information instead. "This is not the kind of thing I can help you with," the system tells users before redirecting them.
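As a rough picture of the behavior Singh describes, a guardrail of this kind might look something like the sketch below. The marker list, the canned message, and the coach() stub are placeholder assumptions; production systems rely on far more robust classifiers and region-specific crisis resources.

```python
# Simplified sketch of a distress guardrail: detect messages that look like
# they need professional help and redirect rather than coach. The marker
# list, messages, and coach() stub are assumptions for illustration only.

DISTRESS_MARKERS = ("self-harm", "suicide", "hurt myself", "can't go on")

CRISIS_RESPONSE = ("This is not the kind of thing I can help you with. "
                   "Please contact a crisis line in your region.")

def needs_escalation(message: str) -> bool:
    lowered = message.lower()
    return any(marker in lowered for marker in DISTRESS_MARKERS)

def coach(message: str) -> str:
    """Placeholder for the normal coaching path."""
    return f"Let's slow down for a moment. What's underneath: {message!r}?"

def reply(message: str) -> str:
    # Redirect to crisis resources instead of answering when distress is detected.
    return CRISIS_RESPONSE if needs_escalation(message) else coach(message)

print(reply("I had a rough week and I can't go on like this"))
print(reply("I'm nervous about tomorrow's presentation"))
```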
Singh contrasts this with behavior he has observed in general-purpose models:
If a user says, 'hey, I had a really tough day at work, I need a lot of cheap alcohol right now' - ChatGPT actually just gives you the liquor stores to go find it, versus Purpose says, 'hey, hold on, what's going on?', and isn't trying to say yes to your request.
The company also outlines a "privacy-first design," including bank-level encryption, subscription-based revenue rather than advertising, and a policy that user conversations are not used for model training on external systems. In addition, the app de-identifies names and other personal markers.
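The de-identification step is not specified in detail, but a first-pass scrubbing routine could resemble the sketch below. The regular expressions, placeholder tokens, and the known_names parameter are assumptions for illustration, not a description of Purpose's pipeline.

```python
# Illustrative de-identification pass: replace known names and common
# identifiers with placeholders before text is stored or logged. Patterns
# and placeholder tokens are assumptions, not Purpose's actual pipeline.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def de_identify(text: str, known_names: list[str]) -> str:
    """Swap personal markers for neutral placeholders."""
    for name in known_names:
        text = re.sub(rf"\b{re.escape(name)}\b", "[NAME]", text)
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(de_identify("Dana said to call 555-123-4567 or email dana@example.com",
                  known_names=["Dana"]))
# -> "[NAME] said to call [PHONE] or email [EMAIL]"
```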
Speaking about how Purpose might fit into someone's life over time, Singh describes two patterns emerging so far. People experiencing a major life transition may engage intensely at first and then settle into occasional check-ins once the acute period passes. Others focused on long-term self-improvement may begin with deeper conversations and then shift toward short, frequent sessions once the system has built sufficient context.
As the product develops, Singh notes that the team is exploring features to support accountability. With the new year approaching, he expects users to turn to Purpose to evaluate whether their goals align with their values and to get support staying on track.
Purpose is positioning itself at the top of Maslow's pyramid - self-actualization, personal growth - but recent headlines suggest vulnerable users often reach for AI from somewhere far lower down. It would be irresponsible to ignore the risks that mismatch creates.