4 Sources
[1]
Exclusive: Mira Murati's Stealth AI Lab Launches Its First Product
Thinking Machines Lab, a heavily funded startup cofounded by prominent researchers from OpenAI, has revealed its first product -- a tool called Tinker that automates the creation of custom frontier AI models. "We believe [Tinker] will help empower researchers and developers to experiment with models, and will make frontier capabilities much more accessible to all people," says Murati, cofounder and CEO of Thinking Machines, in an interview with WIRED ahead of the announcement.

Big companies and academic labs already fine-tune open source AI models to create new variants that are optimized for specific tasks, like solving math problems, drafting legal agreements, or answering medical questions. Typically, this work involves acquiring and managing clusters of GPUs and using various software tools to ensure that large-scale training runs are stable and efficient. Tinker promises to allow more businesses, researchers, and even hobbyists to fine-tune their own AI models by automating much of this work.

Essentially, the team is betting that helping people fine-tune frontier models will be the next big thing in AI. And there's reason to believe they might be right. Thinking Machines Lab is helmed by researchers who played a core role in the creation of ChatGPT. And, compared to similar tools on the market, Tinker is more powerful and user friendly, according to beta testers I spoke with.

Murati says that Thinking Machines Lab hopes to demystify the work involved in tuning the world's most powerful AI models, and make it possible for more people to explore the outer limits of AI. "We're making what is otherwise a frontier capability accessible to all, and that is completely game changing," she says. "There are a ton of smart people out there, and we need as many smart people as possible to do frontier AI research."

Tinker currently allows users to fine-tune two families of open source models: Meta's Llama and Alibaba's Qwen.
Users can write a few lines of code to tap into the Tinker API and start fine-tuning through supervised learning, which means adjusting the model with labeled data, or through reinforcement learning, an increasingly popular method for tuning models by giving them positive or negative feedback based on their outputs. Users can then download their fine-tuned model and run it wherever they want. The AI industry is watching the launch closely -- in part due to the caliber of the team behind it.
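The two tuning modes described above can be illustrated with a deliberately simplified toy (this is not Tinker's actual API): a supervised step nudges a parameter toward a labeled target, while a reinforcement-style step samples an output, scores it with positive or negative feedback, and reinforces accordingly.

```python
# Toy illustration (not Tinker's actual API) of the two tuning modes the
# article describes: supervised learning on labeled data vs. reinforcement
# learning from positive/negative feedback on sampled outputs.
import random

random.seed(0)

w = 0.0    # a single stand-in "model parameter"
lr = 0.1   # learning rate

def supervised_step(w, x, y):
    # Move toward a labeled target via the squared-error gradient.
    pred = w * x
    grad = 2 * (pred - y) * x     # d/dw of (w*x - y)^2
    return w - lr * grad

def reinforce_step(w, x):
    # Sample a stochastic "output", score it, and reinforce proportionally.
    action = w * x + random.gauss(0, 0.5)    # noisy output
    reward = 1.0 if action > 0.5 else -1.0   # positive/negative feedback
    # Score-function-style update: push w toward actions that earned reward.
    return w + lr * reward * (action - w * x) * x

w_sup = supervised_step(w, x=1.0, y=1.0)   # moves toward the label
w_rl = reinforce_step(w, x=1.0)            # moves based on feedback
print(w_sup)  # → 0.2
```

In a managed service like Tinker, the user supplies this kind of training logic while the platform runs it at scale on its own clusters.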
[2]
Thinking Machines' first official product is here: meet Tinker, an API for distributed LLM fine-tuning
Thinking Machines, the AI startup founded earlier this year by former OpenAI CTO Mira Murati, has launched its first product: Tinker, a Python-based API designed to make large language model (LLM) fine-tuning both powerful and accessible. Now in private beta, Tinker gives developers and researchers direct control over their training pipelines while offloading the heavy lifting of distributed compute and infrastructure management. As Murati wrote in a post on the social network X: "Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines." Tinker's launch is the first public milestone for Thinking Machines, which raised $2 billion earlier this year from a16z, NVIDIA, Accel, and others. The company's goal is to support more open and customizable AI development -- a mission that appears to resonate with both independent researchers and institutions frustrated by the opaque tooling around today's proprietary models.

A Developer-Centric Training API

Tinker is not another drag-and-drop interface or black-box tuning service. Instead, it offers a low-level but user-friendly API, giving researchers granular control over loss functions, training loops, and data workflows -- all in standard Python code. The actual training workloads run on Thinking Machines' managed infrastructure, enabling fast distributed execution without any of the usual GPU orchestration headaches. At its core, Tinker offers:

* Python-native primitives such as forward_backward and sample, enabling users to build custom fine-tuning or RL algorithms.
* Support for both small and large open-weight models, including Mixture-of-Experts architectures like Qwen-235B-A22B.
* Integration with LoRA-based tuning, allowing multiple training jobs to share compute pools, optimizing cost-efficiency.
* An open-source companion library called the Tinker Cookbook, which includes implementations of post-training methods.

As UC Berkeley computer science PhD student Tyler Griggs wrote on X after testing the API, "Many RL fine-tuning services are enterprise-oriented and don't let you replace training logic. With Tinker, you can ignore compute and just 'tinker' with the envs, algs, and data."

Real-World Use Cases Across Institutions

Before its public debut, Tinker was already in use across several research labs. Early adopters include teams from, yes, Berkeley, as well as Princeton, Stanford, and Redwood Research, each applying the API to unique model training problems:

* Princeton's Goedel Team fine-tuned LLMs for formal theorem proving. Using Tinker and LoRA with just 20% of the data, they matched the performance of full-parameter SFT models like Goedel-Prover V2. Their model, trained on Tinker, reached 88.1% pass@32 on the MiniF2F benchmark and 90.4% with self-correction, beating out larger closed models.
* The Rotskoff Lab at Stanford used Tinker to train chemical reasoning models. With reinforcement learning on top of LLaMA 70B, accuracy on IUPAC-to-formula conversion jumped from 15% to 50%, a boost researchers described as previously out of reach without major infra support.
* SkyRL at Berkeley ran custom multi-agent reinforcement learning loops involving async off-policy training and multi-turn tool use -- made tractable thanks to Tinker's flexibility.
* Redwood Research used Tinker to RL-train Qwen3-32B on long-context AI control tasks. Researcher Eric Gan shared that without Tinker, he likely wouldn't have pursued the project, noting that scaling multi-node training had always been a barrier.

These examples demonstrate Tinker's versatility -- it supports both classical supervised fine-tuning and highly experimental RL pipelines across vastly different domains.
Community Endorsements from the AI Research World

The Tinker announcement sparked immediate reactions from across the AI research community. OpenAI co-founder and former Tesla AI head Andrej Karpathy (now head of AI-native school Eureka Labs) praised Tinker's design tradeoffs, writing on X: "Compared to the more common and existing paradigm of 'upload your data, we'll post-train your LLM,' this is, in my opinion, a more clever place to slice up the complexity of post-training." He added that Tinker lets users retain ~90% of algorithmic control while removing ~90% of infrastructure pain. John Schulman, OpenAI co-founder and now chief scientist and co-founder of Thinking Machines, described Tinker on X as "the infrastructure I've always wanted," and included a quote attributed to the late British philosopher and mathematician Alfred North Whitehead: "Civilization advances by extending the number of important operations which we can perform without thinking of them." Others noted how clean the API was to use and how smoothly it handled RL-specific scenarios like parallel inference and checkpoint sampling. Philipp Moritz and Robert Nishihara, co-founders of Anyscale and creators of Ray, the widely used open source framework for scaling AI applications, highlighted the opportunity to combine Tinker with distributed compute frameworks for even greater scale.

Free to Start, Pay-As-You-Go Pricing Coming Soon

Tinker is currently available in private beta, with a waitlist sign-up open to developers and research teams. During the beta, use of the platform is free. A usage-based pricing model will be introduced in the coming weeks. For organizations interested in deeper integration or dedicated support, the company invites inquiries through its website.

Background on Thinking Machines and OpenAI Exodus

Thinking Machines was founded by Mira Murati, who served as CTO of OpenAI until her departure in September 2024.
Her exit followed a period of organizational instability at OpenAI and marked one of several high-profile researcher departures, especially on OpenAI's superalignment team, which has since been disbanded. Murati announced her new company's vision in early 2025, emphasizing three pillars:

* Helping people adapt AI systems to their specific needs
* Building strong foundations for capable and safe AI
* Fostering open science through public releases of models, code, and research

In July, Murati confirmed that the company had raised $2 billion, positioning Thinking Machines as one of the most well-funded independent AI startups. Investors cited the team's experience in core breakthroughs like ChatGPT, PPO, TRPO, PyTorch, and OpenAI Gym. The company distinguishes itself by focusing on multimodal AI systems that collaborate with users through natural communication, rather than aiming for fully autonomous agents. Its infrastructure and research efforts aim to support high-quality, adaptable models while maintaining rigorous safety standards. Since then, it has also published several research papers on open source techniques that anyone in the machine learning and AI community can use freely. This emphasis on openness, infrastructure quality, and researcher support sets Thinking Machines apart -- even as the open source AI market has become intensely competitive, with numerous companies fielding powerful models that rival the performance of well-capitalized U.S. labs like OpenAI, Anthropic, Google, Meta, and others. As competition for developer mindshare heats up, Thinking Machines is signaling that it's ready to meet demand with a working product, technical clarity, and public documentation.
[3]
Mira Murati's Thinking Machines launches first product, Tinker
Former OpenAI CTO Mira Murati's AI start-up Thinking Machines has launched its first product, Tinker, an API for fine-tuning language models. Mira Murati's Thinking Machines has launched its much-anticipated first product - Tinker, "a flexible API for fine-tuning language models". Back in July, Murati's Thinking Machines Lab attracted $2bn in investment in a round led by A16z (Andreessen Horowitz), seeing it valued at $12bn, before even bringing a product to market. Other investors included chip giants Nvidia and AMD, as well as Accel, ServiceNow, Cisco and Jane Street. As CTO at OpenAI, Murati oversaw some of the major developments at the AI giant, including the likes of ChatGPT, and even briefly took over as interim chief executive officer of OpenAI when Sam Altman was removed in November 2023, and subsequently reinstated. "Today we launched Tinker," Murati said in a social media post yesterday (Oct 1). "Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines. Excited to see what people build." Thinking Machines says Tinker will "empower researchers and hackers to experiment with models by giving them control over the algorithms and data while we handle the complexity of distributed training". "Tinker advances our mission of enabling more people to do research on cutting-edge models and customize them to their needs," it said in its launch statement. "Tinker lets you fine-tune a range of large and small open-weight models, including large mixture-of-experts models such as Qwen-235B-A22B," it explains. "Switching from a small model to a large one is as simple as changing a single string in your Python code." Qwen is Alibaba's answer to DeepSeek. Describing Tinker as a "managed service", the start-up says it runs on its internal clusters and training infrastructure.
"We handle scheduling, resource allocation, and failure recovery. This allows you to get small or large runs started immediately, without worrying about managing infrastructure. We use LoRA [low-rank adaptation] so that we can share the same pool of compute between multiple training runs, lowering costs." Thinking Machines emphasises that getting good results will mean getting the details right, and to that end it has released an open-source library called the Tinker Cookbook on GitHub "with modern implementations of post-training methods that run on top of the Tinker API". Tinker is available in private beta for researchers and developers, and they can sign up for a waitlist on Thinking Machines' website. It says it will start onboarding users immediately. While free to start, Thinking Machines has indicated that it will begin introducing usage-based pricing in the coming weeks.
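The "changing a single string" claim can be sketched with a mock client (this is a hypothetical stand-in, not the real Tinker library; the class and model strings are illustrative): the trainer is configured by a base-model name, so scaling from a small model to a large mixture-of-experts model changes only that string while the managed service absorbs the infrastructure differences.

```python
# Hypothetical mock (not the real Tinker client) illustrating the workflow
# the article describes: model choice is a single configuration string.
class MockTrainingClient:
    """Stand-in for a managed fine-tuning client; all names are illustrative."""

    def __init__(self, base_model: str, lora_rank: int = 16):
        self.base_model = base_model   # which open-weight model to adapt
        self.lora_rank = lora_rank     # LoRA keeps per-run trainable state small

    def describe(self) -> str:
        return f"fine-tuning {self.base_model} with LoRA rank {self.lora_rank}"

# Start with a small model (hypothetical identifier)...
small = MockTrainingClient(base_model="Llama-3.1-8B")
# ...and scale to a large mixture-of-experts model by changing one string.
large = MockTrainingClient(base_model="Qwen-235B-A22B")

print(small.describe())
print(large.describe())
```

Everything else about the run (scheduling, resource allocation, failure recovery) is, per the article, handled server-side by the service.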
[4]
Thinking Machines launches Tinker language model fine-tuning service - SiliconANGLE
Thinking Machines launches Tinker language model fine-tuning service Thinking Machines Lab Inc., the artificial intelligence startup led by former OpenAI executive Mira Murati, today introduced its first commercial offering. Tinker is a cloud-based service with which developers can fine-tune, or customize, AI models. It supports more than a half-dozen open-source large language models at launch. Thinking Machines Chief Executive Officer Mira Murati launched the startup in February following a two-year stint as OpenAI's Chief Technology Officer. During her tenure at the AI provider, she oversaw the development of ChatGPT and the DALL-E series of image generation models. Thinking Machines' team also includes several other former OpenAI staffers. At the start of the year, rumors emerged that the company was seeking to raise $1 billion from investors. Thinking Machines closed a seed round worth twice as much in July at a $12 billion valuation. The investment included the participation of Nvidia Corp., Advanced Micro Devices Inc. and other major tech firms. Tinker uses AI clusters operated by Thinking Machines to fine-tune customers' language models. According to the company, the service automates tasks such as determining what hardware resources should be allocated to which workloads. If an error emerges during the fine-tuning workflow, Tinker automatically performs recovery. The service uses a technique known as LoRA, or low-rank adaptation, to customize customers' AI models. The technology reduces the amount of hardware needed for the task and thereby cuts costs. Fine-tuning an AI model usually requires developers to train all its parameters, the settings that determine how the algorithm processes data. LoRA skips that step. Instead of modifying a model's existing components, the technology extends it with a small number of additional parameters and only trains those new settings. LoRA also reduces hardware usage in other ways.
When multiple development teams are building customized versions of the same model, they can share the model's core parameters. That avoids the need to create a separate copy of the model for each project. Thinking Machines has released an open-source toolkit called the Tinker Cookbook to help developers use Tinker. According to the company, the software makes it easier to implement more than a half dozen common fine-tuning workflows. Developers can use it to optimize their models for tasks such as solving math problems and interacting with third-party applications. Tinker is currently in private beta. Thinking Machines says that the service has already been adopted by researchers at Stanford University, AI safety lab Redwood Research and several other organizations.
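The arithmetic behind LoRA's savings is straightforward. For a weight matrix of size d x k, full fine-tuning trains all d*k entries; LoRA freezes that matrix and trains two small factors B (d x r) and A (r x k), so only r*(d + k) parameters update, and the adapted weight is the frozen base plus the rank-r product B @ A. A minimal sketch in plain Python, with hypothetical layer dimensions:

```python
# Illustrative LoRA arithmetic: the base weight stays frozen, and only the
# two low-rank factors are trainable. Dimensions below are hypothetical.
d, k = 4096, 4096   # layer input/output dimensions
r = 16              # low rank of the adapter

full_finetune_params = d * k       # every weight trainable in full fine-tuning
lora_params = r * (d + k)          # only the adapter factors train under LoRA
reduction = full_finetune_params / lora_params
print(full_finetune_params, lora_params, round(reduction))  # → 16777216 131072 128

# Tiny numeric check that the adapted weight is W + B @ A (a rank-r update):
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
B = [[0.5], [0.25]]            # 2x1 trainable factor
A = [[2.0, 4.0]]               # 1x2 trainable factor
W_eff = [[W[i][j] + sum(B[i][t] * A[t][j] for t in range(1))
          for j in range(2)] for i in range(2)]
```

Because the frozen base weights never change, many adapter pairs (one per customer or project) can sit on top of the same loaded copy of the model, which is exactly the compute-sharing the service describes.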
Mira Murati's AI startup launches Tinker, a powerful API for customizing language models, aiming to democratize frontier AI capabilities.
Thinking Machines Lab, a heavily funded AI startup cofounded by former OpenAI CTO Mira Murati, has unveiled its first product - Tinker, an API designed to automate and simplify the creation of custom frontier AI models [1][2]. This launch marks a significant milestone for the company, which raised $2 billion earlier this year at a $12 billion valuation [3].
Tinker aims to make frontier AI capabilities more accessible to researchers, developers, and even hobbyists. The tool automates much of the complex work involved in fine-tuning large language models (LLMs), including managing GPU clusters and ensuring stable, efficient training runs [1]. Murati emphasizes the transformative potential of Tinker: "We're making what is otherwise a frontier capability accessible to all, and that is completely game changing" [1].
Tinker offers a Python-based API that provides granular control over the fine-tuning process while handling the complexities of distributed training [2]. Key features include [1][2]:

* Python-native primitives for building custom fine-tuning or RL algorithms
* Support for both small and large open-weight models, including mixture-of-experts architectures such as Qwen-235B-A22B
* LoRA-based tuning that lets multiple training jobs share compute pools, lowering costs
* An open-source companion library, the Tinker Cookbook, with implementations of common post-training methods
Tinker has already been adopted by several research institutions, demonstrating its versatility across different domains [2]:

* Princeton's Goedel team fine-tuned LLMs for formal theorem proving
* Stanford's Rotskoff Lab trained chemical reasoning models with reinforcement learning
* Berkeley's SkyRL group ran custom multi-agent reinforcement learning loops
* Redwood Research RL-trained Qwen3-32B on long-context AI control tasks
The AI community has responded positively to Tinker's launch. Andrej Karpathy, OpenAI co-founder and former Tesla AI head, praised Tinker's design tradeoffs, noting that it allows users to retain "~90% of algorithmic control while removing ~90% of infrastructure pain" [2].
John Schulman, chief scientist and co-founder of Thinking Machines, described Tinker as "the infrastructure I've always wanted" [2].
Tinker is currently available in private beta, with Thinking Machines starting to onboard users immediately. While initially free to use, the company plans to introduce usage-based pricing in the coming weeks [3][4]. As the AI industry closely watches this launch, Tinker's potential to democratize frontier AI capabilities could significantly impact the landscape of AI research and development, enabling a broader range of individuals and organizations to contribute to and benefit from advancements in language models.
Summarized by Navi