On Wed, 18 Dec, 12:05 AM UTC
6 Sources
[1]
OpenAI makes the full version of its o1 reasoning model available, but only to some developers - SiliconANGLE
OpenAI has said it's making the full version of its o1 reasoning model available to its most committed developer customers. Today's announcement came on the ninth day of its holiday-themed press blitz, known as "12 Days of OpenAI," where the company said it's rolling out access to the full o1 model for developers in the "Tier 5" category only. That means it's restricted to developers who have had an account with OpenAI for at least one month, and who have spent at least $1,000 on its services. Prior to today's announcement, developers could only access the less powerful o1-preview model.

In addition to the restrictions on its use, the full version of the o1 reasoning model is very expensive, due to the enormous computing resources required to power it. According to the company, it will cost $15 for every 750,000 words analyzed, and $60 for every 750,000 words it generates. That makes it almost four times as expensive as the more widely used GPT-4o model.

Fortunately, those who are prepared to pay the higher prices will at least get some new capabilities, as OpenAI has made a number of improvements over the preview iteration. For one thing, the full version of o1 is more customizable than the older version. There's a new "reasoning_effort" parameter that dictates how long the model will ponder a specific question. It also supports function calling, which means it can be connected to external data sources, plus developer messages and image analysis, which were not supported by the o1-preview model. Its latency has been reduced too, as it uses around 60% fewer reasoning tokens on average.

In other news, OpenAI said it's incorporating the GPT-4o and 4o-mini models into its Realtime application programming interface, which is designed for low-latency vocal AI applications such as Advanced Voice Mode. The Realtime API also gains support for WebRTC, an open standard for developing vocal AI applications in web browsers, so we may well see a lot more websites trying to talk to their users in the coming months. "Our WebRTC integration is designed to enable smooth and responsive interactions in real-world conditions, even with variable network quality," OpenAI said in a blog post. "It handles audio encoding, streaming, noise suppression, and congestion control."

Finally, there's a new feature called "direct preference optimization" for developers who want to fine-tune their AI models. With OpenAI's existing techniques for supervised fine-tuning, developers are required to provide examples of the input/output pairs they want to use to refine their models. With this new feature, they can instead provide two different responses and indicate which one is preferable to the other. According to the company, this helps models learn the difference between the user's preferred and non-preferred answers, automatically detecting changes in formatting, style guidelines, or verbosity, and factoring these into the new model.

The update is one of the most exciting so far in OpenAI's 12-day media bonanza, following the launch of the Sora video generation model, a new Projects feature, and updates to Advanced Voice Mode, Canvas and Search.
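For developers curious what the new knobs look like in practice, here is a minimal sketch using OpenAI's official Python SDK, assuming the "o1-2024-12-17" model name cited in the coverage; the low/medium/high values for "reasoning_effort" and the "developer" message role follow OpenAI's documentation, but treat the details as illustrative rather than definitive:

```python
# Minimal sketch: calling the full o1 model with the new
# "reasoning_effort" parameter and a developer message.
# Assumes the official openai Python SDK and an OPENAI_API_KEY
# in the environment; the model name is the one cited in coverage.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-2024-12-17",
    reasoning_effort="high",  # "low" | "medium" | "high"; trades latency for deliberation
    messages=[
        # Developer messages replace system prompts for o-series models.
        {"role": "developer", "content": "Answer tersely, showing only the final result."},
        {"role": "user", "content": "A train leaves at 3:40 and arrives at 6:05. How long is the trip?"},
    ],
)

print(response.choices[0].message.content)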
[2]
OpenAI opens up developer access to the full o1 reasoning model
On the ninth day of OpenAI's holiday press blitz, the company announced that it is releasing the full version of its o1 reasoning model to select developers through the company's API. Until Tuesday's news, devs could only access the less-capable o1-preview model.

According to the company, the full o1 model will begin rolling out to folks in OpenAI's "Tier 5" developer category. Those are users who have had an account for more than a month and who have spent at least $1,000 with the company. The new service is especially pricey for users (on account of the added compute resources o1 requires), costing $15 for every (roughly) 750,000 words analyzed and $60 for every (roughly) 750,000 words generated by the model. That's three to four times the cost of performing the same tasks with GPT-4o.

At those prices, OpenAI made sure to improve the full model's capabilities over the preview iteration's. The new o1 model is more customizable than its predecessor (its new "reasoning_effort" parameter dictates how long the AI ponders a given question) and offers function calling, developer messages, and image analysis, all of which were missing from the o1-preview.

The company also announced that it is incorporating its GPT-4o and 4o-mini models into its Realtime API, which is built for low-latency, vocal-AI applications (like Advanced Voice Mode). The API also now supports WebRTC, the industry's open standard for developing vocal-AI applications in web browsers, so get ready for a whole bunch more websites trying to talk to you come 2025. "Our WebRTC integration is designed to enable smooth and responsive interactions in real-world conditions, even with variable network quality," OpenAI wrote in its announcement. "It handles audio encoding, streaming, noise suppression, and congestion control."

As part of the live-stream event, OpenAI has so far unveiled the full version of o1 (in addition to Tuesday's announcement), released its Sora video generation model, debuted its new Projects feature, and provided multiple updates to its Canvas, Search and Advanced Voice Mode features. With only three days left before the event's finale, what will OpenAI show off next? We'll have to wait and see.
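The per-word prices quoted above translate to roughly $15 per million input tokens and $60 per million output tokens, since about 750,000 English words is approximately 1 million tokens. A quick back-of-the-envelope estimator, with that token-to-word conversion as a stated assumption:

```python
# Back-of-the-envelope cost estimator for the quoted o1 API pricing.
# The articles quote prices per ~750,000 words; OpenAI bills per token,
# and ~750K words is roughly 1M tokens, so we treat the quoted figures
# as $15 / 1M input tokens and $60 / 1M output tokens.
O1_INPUT_PER_MTOK = 15.00   # USD per 1M input tokens (quoted)
O1_OUTPUT_PER_MTOK = 60.00  # USD per 1M output tokens (quoted)

def o1_request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single o1 API call."""
    return ((input_tokens / 1_000_000) * O1_INPUT_PER_MTOK
            + (output_tokens / 1_000_000) * O1_OUTPUT_PER_MTOK)

# Example: a 2,000-token prompt producing a 10,000-token answer.
# Reasoning models also bill hidden reasoning tokens as output,
# so real costs skew toward the $60 side.
print(f"${o1_request_cost(2_000, 10_000):.2f}")  # -> $0.63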
[3]
OpenAI brings its o1 reasoning model to its API -- for certain developers | TechCrunch
OpenAI is bringing o1, its "reasoning" AI model, to its API -- but only for certain developers to start. Starting today, o1 will begin rolling out to devs in OpenAI's "tier 5" usage category, the company said. To qualify for tier 5, developers have to spend at least $1,000 with OpenAI and have an account that's older than 30 days since their first successful payment. o1 replaces the o1-preview model that was already available in the API.

Unlike most AI, so-called reasoning models like o1 effectively fact-check themselves, which helps them avoid some of the pitfalls that normally trip up models. As a drawback, they often take longer to arrive at solutions. They're also quite pricey -- in part because they require a lot of computing resources to run. OpenAI charges $15 for every ~750,000 words o1 analyzes and $60 for every ~750,000 words the model generates. That's between 3x and 4x the cost of OpenAI's latest "non-reasoning" model, GPT-4o.

The o1 model in the OpenAI API is far more customizable than o1-preview, thanks to new features like function calling (which allows the model to be connected to external data), developer messages (which let devs instruct the model on tone and style), and image analysis. In addition to structured outputs, o1 also has an API parameter, "reasoning_effort," that enables control over how long the model "thinks" before responding to a query.

OpenAI said that the version of o1 in the API -- and, soon, the company's AI chatbot platform, ChatGPT -- is a "new post-trained" version of o1. Compared to the o1 model released in ChatGPT two weeks ago, this one, "o1-2024-12-17," improves on "areas of model behavior based on feedback," OpenAI vaguely said. "We are rolling out access incrementally while working to expand access to additional usage tiers and ramping up rate limits," the company wrote in a blog post provided to TechCrunch.

In other dev-related news today, OpenAI announced new versions of its GPT-4o and GPT-4o mini models as part of the Realtime API, OpenAI's API for building apps with low-latency, AI-generated voice responses. The new models ("gpt-4o-realtime-preview-2024-12-17" and "gpt-4o-mini-realtime-preview-2024-12-17"), which boast improved data efficiency and reliability, are also cheaper to use, OpenAI said.

Speaking of the Realtime API (no pun intended), it remains in beta, but it has gained several new capabilities, like concurrent out-of-band responses, which enable background tasks such as content moderation to run without interrupting interactions. The API also now supports WebRTC, the open standard for building real-time voice applications for browser-based clients, smartphones, and internet-of-things devices. In what's certainly no coincidence, OpenAI hired the creator of WebRTC, Justin Uberti, in early December. "Our WebRTC integration is designed to enable smooth and responsive interactions in real-world conditions, even with variable network quality," OpenAI wrote in the blog. "It handles audio encoding, streaming, noise suppression, and congestion control."

In the last of its updates Tuesday, OpenAI brought preference fine-tuning to its fine-tuning API; preference fine-tuning compares pairs of model responses to "teach" a model to distinguish between preferred and "non-preferred" responses. And the company launched a beta for official software development kits in the programming languages Go and Java.
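As a rough illustration of the function-calling feature TechCrunch describes, the sketch below registers a hypothetical get_order_status tool; the tool name and schema are invented for the example, and whether the model actually emits a tool call depends on the prompt:

```python
# Sketch of o1 function calling: the model can ask the caller to invoke
# a tool rather than answering directly. The get_order_status function
# and its schema are hypothetical, for illustration only.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical external lookup
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="o1-2024-12-17",
    tools=tools,
    messages=[{"role": "user", "content": "Where is order A-1042?"}],
)

msg = response.choices[0].message
if msg.tool_calls:  # the model chose to call our tool
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:  # the model answered directly instead
    print(msg.content)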
[4]
OpenAI o1 API offers fact-checking AI but developers will pay a premium
OpenAI has introduced its new o1 reasoning model to its API, rolling it out to selected developers starting December 17, 2024. The launch comes as part of a broader update that also includes new features enhancing functionality and customization for developers. To qualify for access, developers must have spent at least $1,000 and maintain accounts older than 30 days.

"Today we're introducing more capable models, new tools for customization, and upgrades that improve performance, flexibility, and cost-efficiency for developers building with AI." -- OpenAI

The o1 model supersedes the previous o1-preview, boasting capabilities that allow it to fact-check its own responses, an advantage not commonly found in AI models. As a trade-off, the reasoning model tends to take longer to generate answers. The cost of processing with o1 is significant: it charges developers $15 for every 750,000 words analyzed and $60 for every 750,000 words generated, marking a sixfold increase compared to the latest non-reasoning model, GPT-4o.

The new o1 is designed to improve on earlier limitations, with OpenAI asserting that it offers "more comprehensive and accurate responses," particularly for technical queries related to programming and business. It includes enhancements such as a reasoning-effort parameter that allows developers to control the processing time for queries. Additionally, the model is more adaptable than its predecessor, supporting features like developer messages to customize chatbot behavior and enabling structured outputs using a JSON schema.

To facilitate more dynamic interactions, OpenAI has improved its function calling capabilities, allowing the model to utilize pre-written external functions when generating answers. This API iteration reportedly uses 60% fewer reasoning tokens than o1-preview, while also achieving higher accuracy -- 25 to 35 percentage points more on benchmarks such as LiveBench and AIME.

OpenAI also expanded its real-time interaction capabilities through the Realtime API, which now supports WebRTC for smoother audio communication. This addition aims to simplify integration for developers, reducing the required integration code from approximately 250 lines to about a dozen. Furthermore, OpenAI has cut the cost of GPT-4o audio tokens by 60% and mini tokens by 90% to encourage usage among developers. "Our WebRTC integration is designed to enable smooth and responsive interactions in real-world conditions, even with variable network quality," OpenAI wrote in the blog. "It handles audio encoding, streaming, noise suppression, and congestion control."

Another significant update is a new method for fine-tuning AI models called direct preference optimization. This allows model trainers to provide two outputs and specify a preference without needing to supply exact input/output examples for every scenario. OpenAI claims this method enhances the model's ability to adapt to various quirks in response style, formatting, and helpfulness.

Developers working in programming languages like Go and Java can now access new software development kits (SDKs) designed for easier API integration. As these updates progress, OpenAI plans to expand access and increase rate limits for more developers beyond the initial tier 5 category.
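The structured-outputs support mentioned above constrains the model's reply to a caller-supplied JSON schema. A minimal sketch, assuming the documented response_format shape of the Chat Completions API; the schema itself is invented for illustration:

```python
# Sketch of Structured Outputs with o1: the response is constrained to
# match a caller-supplied JSON Schema, so downstream systems can parse
# it reliably. The schema here is illustrative.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-2024-12-17",
    messages=[{"role": "user", "content": "Summarize: revenue rose 12% to $4.1B in Q3."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "headline": {"type": "string"},
                    "metric": {"type": "string"},
                    "change_pct": {"type": "number"},
                },
                "required": ["headline", "metric", "change_pct"],
                "additionalProperties": False,
            },
        },
    },
)

# The content is guaranteed to parse as JSON matching the schema.
print(json.loads(response.choices[0].message.content))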
[5]
OpenAI opens its most powerful model, o1, up to third-party developers
On the 9th day of its holiday-themed product announcements known as "12 Days of OpenAI," OpenAI is rolling out its most advanced model, o1, to third-party developers through its application programming interface (API). This marks a major step forward for devs looking to build new advanced AI applications or integrate the most advanced OpenAI tech into their existing apps and workflows, be they enterprise- or consumer-facing.

If you aren't yet acquainted with OpenAI's o1 series, here's the rundown: announced back in September 2024, it is the first in a new "family" of models from the ChatGPT company, moving beyond the large language models (LLMs) of the GPT series to offer "reasoning" capabilities. Basically, the o1 family of models -- o1 and o1-mini -- take longer to respond to a user's prompts, but check themselves while formulating an answer to see if they're correct and to avoid hallucinations. At the time, OpenAI said o1 could handle more complex, PhD-level problems -- something borne out by real-world users as well.

While developers previously had access to a preview version of o1 that they could use to build their own apps atop -- say, a PhD advisor or lab assistant -- the production-ready release of the full o1 model through the API brings improved performance, lower latency, and new features that make it easier to integrate into real-world applications. OpenAI made o1 available to consumers through its ChatGPT Plus and Pro plans roughly two and a half weeks ago, and added the capability for the model to analyze and respond to imagery and files uploaded by users, too. Alongside today's launch, OpenAI announced significant updates to its Realtime API, price reductions, and a new fine-tuning method that gives developers greater control over their models.

The full o1 model is now available to developers through OpenAI's API

The new o1 model, available as o1-2024-12-17, is designed to excel at complex, multi-step reasoning tasks. Compared to the earlier o1-preview version, this release improves accuracy, efficiency, and flexibility. OpenAI reports significant gains across a range of benchmarks, including coding, mathematics, and visual reasoning tasks. For example, coding results on SWE-bench Verified increased from 41.3 to 48.9, while performance on the math-focused AIME test jumped from 42 to 79.2. These improvements make o1 well-suited for building tools that streamline customer support, optimize logistics, or solve challenging analytical problems.

Several new features enhance o1's functionality for developers. Structured Outputs allow responses to reliably match custom formats such as JSON schemas, ensuring consistency when interacting with external systems. Function calling simplifies the process of connecting o1 to APIs and databases, while the ability to reason over visual inputs opens up use cases in manufacturing, science, and coding. Developers can also tune o1's behavior using the new reasoning_effort parameter, which controls how long the model spends on a task to balance performance and response time.

OpenAI's Realtime API gets a boost to power intelligent, conversational voice/audio AI assistants

OpenAI also announced updates to its Realtime API, designed to power low-latency, natural conversational experiences like voice assistants, live translation tools, and virtual tutors. A new WebRTC integration simplifies building voice-based apps by providing direct support for audio streaming, noise suppression, and congestion control. Developers can now integrate real-time capabilities with minimal setup, even in variable network conditions.

OpenAI is also introducing new pricing for its Realtime API, reducing costs by 60% for GPT-4o audio, to $40 per 1 million input tokens and $80 per 1 million output tokens. Cached audio input costs are reduced by 87.5%, now priced at $2.50 per 1 million input tokens. To further improve affordability, OpenAI is adding GPT-4o mini, a smaller, cost-efficient model priced at $10 per 1 million audio input tokens and $20 per 1 million audio output tokens. Text token rates for GPT-4o mini are also significantly lower, starting at $0.60 per 1 million input tokens and $2.40 per 1 million output tokens.

Beyond pricing, OpenAI is giving developers more control over responses in the Realtime API. Features like concurrent out-of-band responses allow background tasks, such as content moderation, to run without interrupting the user experience. Developers can also customize input contexts to focus on specific parts of a conversation and control when voice responses are triggered for more accurate and seamless interactions.

Preference Fine-Tuning offers new customization options

Another major addition is Preference Fine-Tuning, a method for customizing models based on user and developer preferences. Unlike Supervised Fine-Tuning, which relies on exact input-output pairs, Preference Fine-Tuning uses pairwise comparisons to teach the model which responses are preferred. This approach is particularly effective for subjective tasks, such as summarization, creative writing, or scenarios where tone and style matter. Early testing with partners like Rogo AI, which builds assistants for financial analysts, shows promising results. Rogo reported that Preference Fine-Tuning helped its model handle complex, out-of-distribution queries better than traditional fine-tuning, improving task accuracy by over 5%. The feature is now available for gpt-4o-2024-08-06 and gpt-4o-mini-2024-07-18, with plans to expand support to newer models early next year.

New SDKs for Go and Java developers

To streamline integration, OpenAI is expanding its official SDK offerings with beta releases for Go and Java. These SDKs join the existing Python, Node.js, and .NET libraries, making it easier for developers to interact with OpenAI's models across more programming environments. The Go SDK is particularly useful for building scalable backend systems, while the Java SDK is tailored for enterprise-grade applications that rely on strong typing and robust ecosystems.

With these updates, OpenAI is offering developers an expanded toolkit to build advanced, customizable AI-powered applications. Whether through o1's improved reasoning capabilities, Realtime API enhancements, or fine-tuning options, OpenAI's latest offerings aim to deliver both performance and cost-efficiency for businesses pushing the boundaries of AI integration.
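To make the Preference Fine-Tuning workflow concrete, here is a hedged sketch of what a job submission might look like with the Python SDK; the prefs.jsonl file name and its example records are hypothetical, and the "dpo" method type reflects OpenAI's fine-tuning API as announced:

```python
# Sketch of Preference Fine-Tuning (direct preference optimization):
# training examples are pairwise comparisons rather than exact
# input/output pairs. File name and record contents are hypothetical.
from openai import OpenAI

client = OpenAI()

# Each JSONL record in prefs.jsonl pairs one prompt with a preferred
# and a non-preferred completion, e.g.:
# {"input": {"messages": [{"role": "user", "content": "Summarize this memo..."}]},
#  "preferred_output": [{"role": "assistant", "content": "Tight, on-tone summary."}],
#  "non_preferred_output": [{"role": "assistant", "content": "Rambling summary..."}]}
training = client.files.create(file=open("prefs.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",  # one of the two models supported at launch
    training_file=training.id,
    method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
)
print(job.id)  # poll this job until it finishes, then use the tuned model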
[6]
OpenAI gifts developers with enhanced voice and reasoning models
OpenAI announced numerous new options for developers who use its technology to build products and services, promising the upgrades will "improve performance, flexibility, and cost-efficiency." In its live announcement today -- which suffered from audio problems -- the OpenAI team first highlighted changes to OpenAI o1, the company's reasoning model that can "handle complex multi-step tasks," according to the company. Developers on the company's highest usage tier can now utilize the model; it's currently used to build automated customer service systems, help inform supply chain decisions, and even forecast financial trends.

The new o1 model can also connect to external data and APIs (Application Programming Interfaces, which are how different software applications communicate with each other). Developers can also send o1 developer messages to give their AI applications a specific tone and style, and the model has vision capabilities, so it can use images to "unlock many more applications in science, manufacturing, or coding, where visual inputs matter."

Improvements were also announced for OpenAI's Realtime API, which developers utilize for voice assistants, virtual tutors, translation bots, and AI Santa voices. The company's new WebRTC support will help real-time voice services, using JavaScript to deliver better audio quality and more helpful responses (e.g., the Realtime API can start formulating a response to a query even while a user is still speaking). OpenAI also announced price reductions for the Realtime API's audio tokens.

Also of note, OpenAI is now offering Preference Fine-Tuning to developers, which customizes the technology for "subjective tasks where tone, style, and creativity matter" better than so-called Supervised Fine-Tuning does.
OpenAI has made its advanced o1 reasoning model available to select developers, offering improved AI capabilities but at a premium cost. The release includes updates to the Realtime API and new fine-tuning methods.
OpenAI has announced the release of its full o1 reasoning model to select developers through its API, marking a significant advancement in AI technology. This release is part of OpenAI's "12 Days of OpenAI" holiday-themed press blitz, occurring on the ninth day of the event [1][2].
The full o1 model is initially available to developers in OpenAI's "Tier 5" category, which includes users whose accounts are more than 30 days old and who have spent at least $1,000 with OpenAI [1][3]. The pricing for o1 is notably higher than previous models, reflecting the increased computing resources required:
- $15 per (roughly) 750,000 words analyzed, or about 1 million input tokens
- $60 per (roughly) 750,000 words generated, or about 1 million output tokens
This pricing structure makes o1 three to four times more expensive than the GPT-4o model [2][4].
The full o1 model offers several improvements over its preview version:
- A new "reasoning_effort" parameter that controls how long the model ponders a question
- Function calling, developer messages, and image analysis, none of which o1-preview supported
- Structured Outputs that reliably match developer-supplied JSON schemas
- Lower latency, with roughly 60% fewer reasoning tokens used on average
OpenAI has also announced updates to its Realtime API, designed for low-latency vocal AI applications:
- New gpt-4o and gpt-4o-mini realtime models with improved data efficiency, reliability, and lower prices
- WebRTC support, the open standard for real-time voice in browsers, smartphones, and internet-of-things devices
- Concurrent out-of-band responses, so background tasks such as content moderation can run without interrupting a conversation
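For readers wanting to experiment, the new WebRTC path targets browsers, but the same Realtime API is also reachable server-to-server over WebSocket. A minimal Python sketch, assuming the beta endpoint and headers from OpenAI's documentation (note that the websockets library renamed extra_headers to additional_headers in newer releases):

```python
# Minimal Realtime API sketch over WebSocket (the server-to-server path;
# the new WebRTC path targets browsers). Endpoint, beta header, and event
# shapes follow OpenAI's beta docs at the time of the announcement.
import asyncio, json, os
import websockets  # pip install websockets

async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    # websockets <= 13 uses extra_headers; >= 14 uses additional_headers.
    async with websockets.connect(url, extra_headers=headers) as ws:
        # Ask for a text-only response so the sketch stays audio-free.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["text"], "instructions": "Say hello."},
        }))
        async for raw in ws:
            event = json.loads(raw)
            print(event["type"])  # stream of server events
            if event["type"] == "response.done":
                break

asyncio.run(main())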
OpenAI introduced a new fine-tuning method called "direct preference optimization" or "preference fine-tuning":
- Developers provide pairs of responses and indicate which is preferred, instead of exact input/output examples
- The model learns differences in formatting, style, and verbosity between preferred and non-preferred answers
- Initially available for gpt-4o-2024-08-06 and gpt-4o-mini-2024-07-18, with newer models to follow
The release of the full o1 model represents a significant step in AI development, offering more advanced reasoning capabilities to developers. However, the high cost may limit its accessibility, potentially impacting its adoption rate and use cases in various industries [1][2][3][4][5].