Thinking Machines Lab Unveils Tinker: An API for AI Model Fine-Tuning

Reviewed by Nidhi Govil


Former OpenAI CTO Mira Murati's startup, Thinking Machines Lab, launches Tinker—an API service designed to democratize and simplify the fine-tuning of large language models for researchers and developers.

Thinking Machines Lab Introduces Tinker: Revolutionizing AI Model Fine-Tuning

Thinking Machines Lab, an AI startup cofounded by former OpenAI CTO Mira Murati, has launched Tinker. The API service automates and simplifies the fine-tuning of custom frontier AI models [1], democratizing AI research by making language-model customization more efficient.

Source: SiliconANGLE

What Tinker Offers

Tinker is a Python-based API that empowers developers and researchers to fine-tune large language models (LLMs) with greater control and accessibility [2]. It supports fine-tuning of open-source models such as Meta's Llama and Alibaba's Qwen, via supervised or reinforcement learning, while abstracting away the complexities of distributed compute and infrastructure [1].
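The division of labor described above, where the user writes the training algorithm in plain Python while the service runs the heavy compute, can be sketched as follows. This is an illustrative stand-in only: the function names and signatures below are hypothetical, not Tinker's actual API, and the "loss" is a toy quadratic rather than a language-model objective.

```python
# Hypothetical sketch of a low-level fine-tuning loop. In a service like
# Tinker, calls such as these would dispatch to managed GPU infrastructure;
# here they are local stubs so the control flow is visible and runnable.

def forward_backward(weights, batch):
    """Stand-in for a remote forward/backward pass: returns loss and gradient.

    Uses a toy quadratic loss (w - target)^2 in place of a real LM objective.
    """
    target = batch["target"]
    loss = (weights - target) ** 2
    grad = 2.0 * (weights - target)
    return loss, grad

def optim_step(weights, grad, lr=0.1):
    """Stand-in for an optimizer update executed on the service side."""
    return weights - lr * grad

def fine_tune(batches, init_weights=0.0):
    """User-owned training loop: the algorithm stays in plain Python,
    with no GPU orchestration code in sight."""
    w = init_weights
    for batch in batches:
        loss, grad = forward_backward(w, batch)
        w = optim_step(w, grad)
    return w

# Repeated steps toward target=1.0 converge close to 1.0.
final = fine_tune([{"target": 1.0}] * 50)
```

The point of the sketch is the shape of the loop: the user decides what a step is (supervised, RL, or custom), while everything inside `forward_backward` and `optim_step` is the service's problem.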

Source: VentureBeat

Core Features and Benefits

  1. Direct Control: Tinker provides low-level Python primitives, allowing users to build custom fine-tuning or RL algorithms without managing GPU orchestration [2].
  2. Efficiency: It leverages Low-Rank Adaptation (LoRA) for cost-effective fine-tuning, enabling multiple training jobs to share compute pools efficiently [3].
  3. Scalability: The service supports a wide range of open-weight models, from smaller variants to large Mixture-of-Experts architectures like Qwen-235B-A22B [2].
  4. Simplified Infrastructure: Tinker handles scheduling, resource allocation, and failure recovery, letting users concentrate on their algorithms and data [4].

Early Adopters and Impact

Tinker has already been adopted by several research institutions, demonstrating its practical utility. Princeton's Goedel Team utilized it for LLM fine-tuning in formal theorem proving, achieving significant results with less data [2]. Stanford's Rotskoff Lab improved chemical reasoning models, and Berkeley's SkyRL group experimented with multi-agent RL training loops [2][3].

Source: Silicon Republic

Industry experts like Andrej Karpathy and John Schulman have praised Tinker for its balance of algorithmic control and infrastructure abstraction, calling it "the infrastructure I've always wanted" [2].

Availability and Future

Currently in private beta with a waitlist, Tinker will eventually transition to usage-based pricing [4]. Thinking Machines has also released an open-source 'Tinker Cookbook' implementing common post-training methods [3]. The initiative marks a significant step toward broadening access to advanced AI capabilities and fostering innovation across the field.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited