Curated by THEOUTPOST
On Sat, 13 Jul, 8:00 AM UTC
13 Sources
[1]
OpenAI working on new reasoning technology under code name 'Strawberry'
ChatGPT maker OpenAI is working on a novel approach to its artificial intelligence models in a project code-named "Strawberry," according to a person familiar with the matter and internal documentation reviewed by Reuters. The project, details of which have not been previously reported, comes as the Microsoft-backed startup races to show that the types of models it offers are capable of delivering advanced reasoning capabilities.

Teams inside OpenAI are working on Strawberry, according to a copy of a recent internal OpenAI document seen by Reuters in May. Reuters could not ascertain the precise date of the document, which details a plan for how OpenAI intends to use Strawberry to perform research. The source described the plan to Reuters as a work in progress, and the news agency could not establish how close Strawberry is to being publicly available. How Strawberry works is a tightly kept secret even within OpenAI, the person said.

The document describes a project that uses Strawberry models with the aim of enabling the company's AI not just to generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.

Asked about Strawberry and the details reported in this story, an OpenAI spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." The spokesperson did not directly address questions about Strawberry.
The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough. Two sources described viewing earlier this year what OpenAI staffers told them were Q* demos, capable of answering tricky science and math questions out of reach of today's commercially available models.

On Tuesday at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg. An OpenAI spokesperson confirmed the meeting but declined to give details of the contents. Reuters could not determine if the project demonstrated was Strawberry.

OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets.

Researchers interviewed by Reuters say that reasoning is key to AI achieving human- or superhuman-level intelligence. While large language models can already summarize dense texts and compose elegant prose far more quickly than any human, the technology often falls short on common-sense problems whose solutions seem intuitive to people, like recognizing logical fallacies and playing tic-tac-toe. When a model encounters these kinds of problems, it often "hallucinates" bogus information.

AI researchers interviewed by Reuters generally agree that reasoning, in the context of AI, involves forming a model that enables the AI to plan ahead, reflect on how the physical world functions, and work through challenging multi-step problems reliably. Improving reasoning in AI models is seen as the key to unlocking everything from making major scientific discoveries to planning and building new software applications.
OpenAI CEO Sam Altman said earlier this year that in AI "the most important areas of progress will be around reasoning ability." Other companies like Google, Meta and Microsoft are likewise experimenting with different techniques to improve reasoning in AI models, as are most academic labs that perform AI research. Researchers differ, however, on whether large language models (LLMs) are capable of incorporating ideas and long-term planning into how they do prediction. For instance, one of the pioneers of modern AI, Yann LeCun, who works at Meta, has frequently said that LLMs are not capable of humanlike reasoning.

AI CHALLENGES

Strawberry is a key component of OpenAI's plan to overcome those challenges, the source familiar with the matter said. The document seen by Reuters described what Strawberry aims to enable, but not how. In recent months, the company has privately been signaling to developers and other outside parties that it is on the cusp of releasing technology with significantly more advanced reasoning capabilities, according to four people who have heard the company's pitches. They declined to be identified because they are not authorized to speak about private matters.

Strawberry includes a specialized way of what is known as "post-training" OpenAI's generative AI models: adapting the base models to hone their performance in specific ways after they have already been "trained" on reams of generalized data, one of the sources said. The post-training phase involves methods like "fine-tuning," a process used on nearly all language models today that comes in many flavors, such as having humans give feedback to the model based on its responses and feeding it examples of good and bad answers. Strawberry has similarities to a method developed at Stanford in 2022 called "Self-Taught Reasoner," or STaR, one of the sources with knowledge of the matter said.
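The "post-training" idea described above can be illustrated with a deliberately tiny sketch. Everything here (the dict standing in for model weights, the `post_train` function, the sample ratings) is a hypothetical stand-in for illustration, not OpenAI's actual method: the point is simply that a base model is adapted with curated feedback after pre-training is finished.

```python
# Toy illustration of post-training: adapt a "base model" (here just a dict
# mapping prompts to responses) using human ratings of candidate answers.
# All names and data are hypothetical stand-ins, not OpenAI code.

def post_train(base_model, feedback):
    """Fold human-rated examples into the model: keep responses rated
    'good' and ignore the ones rated 'bad'."""
    tuned = dict(base_model)  # leave the base model untouched
    for prompt, response, rating in feedback:
        if rating == "good":
            tuned[prompt] = response
    return tuned

# Human feedback: a good and a bad answer to the same prompt.
feedback = [
    ("capital of France?", "Paris", "good"),
    ("capital of France?", "Lyon", "bad"),
]

tuned = post_train({}, feedback)
print(tuned["capital of France?"])  # -> Paris
```

Real fine-tuning updates billions of weights via gradient descent rather than a lookup table, but the workflow is the same shape: pre-trained base in, curated feedback applied, adapted model out.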
STaR enables AI models to "bootstrap" themselves to higher intelligence levels by iteratively creating their own training data, and in theory could be used to get language models to transcend human-level intelligence, one of its creators, Stanford professor Noah Goodman, told Reuters. "I think that is both exciting and terrifying... if things keep going in that direction we have some serious things to think about as humans," Goodman said. Goodman is not affiliated with OpenAI and is not familiar with Strawberry.

Among the capabilities OpenAI is aiming Strawberry at is performing long-horizon tasks (LHT), the document says, referring to complex tasks that require a model to plan ahead and perform a series of actions over an extended period of time, the first source explained. To do so, OpenAI is creating, training and evaluating the models on what the company calls a "deep-research" dataset, according to the internal documentation. Reuters was unable to determine what is in that dataset or how long an extended period would mean.

OpenAI specifically wants its models to use these capabilities to conduct research by browsing the web autonomously with the assistance of a "CUA," or computer-using agent, that can take actions based on its findings, according to the document and one of the sources. OpenAI also plans to test its capabilities on doing the work of software and machine learning engineers.
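The STaR-style bootstrapping Goodman describes (a model generating reasoning traces, keeping only the ones that reach correct answers, then training on those traces) can be sketched in miniature. This is a hypothetical toy, not the Stanford implementation: a dict stands in for the model, and `solve` and `star_round` are invented names.

```python
# Toy sketch of one STaR-style bootstrap round: sample a rationale and answer
# for each problem, keep only the ones whose answer checks out, then
# "fine-tune" on the kept examples. Names are invented for illustration.

def solve(model, question):
    """Stand-in for sampling a chain-of-thought and final answer."""
    answer = model.get(question)          # None if the model can't solve it
    return f"rationale for {question!r}", answer

def star_round(model, problems):
    """One self-improvement iteration: the model's own correct traces
    become its next batch of training data."""
    kept = []
    for question, gold in problems:
        rationale, answer = solve(model, question)
        if answer == gold:                # filter by answer correctness
            kept.append((question, rationale, gold))
    tuned = dict(model)
    tuned.update({q: a for q, _, a in kept})   # train on the kept traces
    return tuned, len(kept)

model = {"2+2": "4"}                      # can already solve one problem
problems = [("2+2", "4"), ("3+3", "6")]
model, n_kept = star_round(model, problems)
print(n_kept)  # -> 1 (only the correctly answered problem is kept)
```

In the real method the gains come from training on the rationales themselves, so later rounds can solve problems earlier rounds could not; the toy above only captures the generate-filter-train loop.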
[2]
OpenAI is reportedly working on more advanced AI models capable of reasoning and 'deep research'
The secret project is code-named 'Strawberry,' according to a Reuters report.

A new report from Reuters claims OpenAI is developing technology to bring advanced reasoning capabilities to its AI models under a secret project code-named "Strawberry." Among the project's goals is to enable the company's AI models to autonomously scour the internet in order to "plan ahead" for more complex tasks, according to an internal document seen by Reuters. The project previously went by the name of Q* (pronounced "Q star"), demos of which showed earlier this year that it could answer "tricky science and math questions," Reuters reports, citing unnamed sources who witnessed the demonstrations. At this stage, much remains unknown about Strawberry, including how far along in development it is, and whether it's the same system with "human-like reasoning" skills that OpenAI reportedly demonstrated at an employee all-hands meeting earlier this week, per Bloomberg. But the ability for the company's AI to conduct "deep research," as is said to be the aim of Strawberry, would mark a huge leap forward from what's available today.
[3]
OpenAI working on new reasoning technology under code name 'Strawberry'
[4]
ChatGPT maker OpenAI working on new reasoning technology 'Strawberry'
Reuters is an international news organisation owned by Thomson Reuters.
[5]
Exclusive-OpenAI working on new reasoning technology under code name 'Strawberry'
(Reporting by Anna Tong in San Francisco and Katie Paul in New York; editing by Ken Li and Claudia Parsons)
[6]
OpenAI Secretly Working on Project 'Strawberry' to Enhance Reasoning and Build Autonomous AI Agents
Project Strawberry was previously known as Project Q*, which was leaked last year and was capable of solving previously unseen math problems.

OpenAI, the creator of ChatGPT, is reportedly working on a new AI technology under the code name "Strawberry." This project aims to significantly enhance the reasoning capabilities of its AI models, as revealed by internal documents and a source familiar with the development. The project's specifics, which have not been previously disclosed, involve a novel approach that allows AI models to plan ahead and navigate the internet autonomously to perform in-depth research. This advancement could address current limitations in AI reasoning, such as common-sense problems and logical fallacies, which often lead to inaccurate outputs.

OpenAI's teams are working on Strawberry to improve the models' ability to perform long-horizon tasks (LHT), which require planning and executing a series of actions over an extended period. The project involves a specialised "post-training" phase, adapting the base models for enhanced performance. This method resembles Stanford's 2022 "Self-Taught Reasoner" (STaR), which enables AI to iteratively create its own training data to reach higher intelligence levels.

A spokesperson from OpenAI acknowledged ongoing research into new AI capabilities but did not directly address the specifics of Strawberry. The internal document indicates that Strawberry includes a "deep-research" dataset to train and evaluate the models, although the contents of this dataset remain undisclosed. In recent months, OpenAI has privately hinted at releasing technology with advanced reasoning capabilities, aiming to overcome challenges in AI research and development.
This innovation is expected to enable AI to conduct research autonomously, using a "computer-using agent" (CUA) to take actions based on its findings. Additionally, OpenAI plans to test Strawberry's capabilities in performing tasks typically done by software and machine learning engineers.

OpenAI has recently unveiled a five-level classification system to track progress towards achieving artificial general intelligence (AGI) and superintelligent AI. OpenAI executives shared this classification system with employees during an internal meeting and plan to share it with investors and external parties. The company currently considers itself at Level 1 and anticipates reaching Level 2 in the near future.

Other tech giants like Google, Meta, and Microsoft are also exploring techniques to enhance AI reasoning. However, experts like Meta's Yann LeCun argue that large language models may not yet be capable of human-like reasoning. OpenAI's CEO, Sam Altman, emphasised earlier this year that reasoning ability is crucial for AI progress. The Strawberry project could mark a significant step towards AI models achieving human- or super-human-level intelligence, potentially revolutionising how AI assists in scientific discoveries and software development.
[7]
ChatGPT maker OpenAI working secret technology code named 'Strawberry': What it is and more - Times of India
ChatGPT maker OpenAI is working on a project code-named "Strawberry," according to internal documentation reviewed by Reuters. While details have not been previously reported, the project reportedly aims to demonstrate advanced reasoning capabilities within the models offered by the Microsoft-backed startup. According to an exclusive Reuters report, teams within OpenAI are actively developing Strawberry, as outlined in a recent internal document from May. The exact timeline for public availability remains uncertain. Strawberry is a key component of OpenAI's plan to overcome these reasoning challenges, the source familiar with the matter said. The document seen by Reuters described what Strawberry aims to enable, but not how.
How "Strawberry" works is a "tightly kept secret"
Strawberry's inner workings remain closely guarded even within OpenAI. The project involves using specialized Strawberry models to enable the AI system not only to generate answers but also to autonomously navigate the internet for what OpenAI terms "deep research." This capability has eluded AI models thus far, according to interviews with AI researchers. OpenAI's spokesperson emphasized continuous research into new AI capabilities, with the belief that these systems will improve in reasoning over time.
Previously known as Q*
Strawberry was already seen as a breakthrough within the company. Earlier this year, OpenAI showcased Q* demos capable of answering complex science and math questions beyond the reach of commercially available models. At an internal all-hands meeting, OpenAI presented a research project with new human-like reasoning skills. Although it remains unclear whether this project was Strawberry, the company hopes that this innovation will significantly enhance its AI models' reasoning abilities. Strawberry involves specialized processing of pre-trained AI models using large datasets.
"Exciting and terrifying" Strawberry has similarities to a method developed at Stanford in 2022 called "Self-Taught Reasoner" or "STaR", one of the sources with knowledge of the matter said, as per the report. STaR enables AI models to "bootstrap" themselves into higher intelligence levels via iteratively creating their own training data, and in theory could be used to get language models to transcend human-level intelligence, one of its creators, Stanford professor Noah Goodman, told Reuters. "I think that is both exciting and terrifying...if things keep going in that direction we have some serious things to think about as humans," Goodman said. Goodman is not affiliated with OpenAI and is not familiar with Strawberry.
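The STaR-style loop described above — generate rationales, keep only those that led to correct answers, fine-tune on the survivors, and repeat — can be sketched in a few lines. The sketch below is a toy illustration only: the "model" is a lookup over prerequisite facts and "fine-tuning" simply records what was learned; none of the names correspond to OpenAI's or Stanford's actual code.

```python
# Toy sketch of STaR-style bootstrapping. A "problem" is solvable only once
# its prerequisite facts have been learned; solving it teaches new facts.
# All names and data are illustrative, not OpenAI's or Stanford's code.

def attempt(problem, learned):
    """Stand-in for sampling a rationale + answer from a model."""
    if problem["needs"] <= learned:          # prerequisites already learned
        rationale = f"combine {sorted(problem['needs'])} -> {problem['answer']}"
        return rationale, problem["answer"]
    return None, None                        # model fails on this problem

def star_bootstrap(problems, iterations=3):
    """Generate rationales, keep only correct ones, 'fine-tune', repeat."""
    learned = set()
    training_data = {}                       # question -> kept rationale
    for _ in range(iterations):
        for p in problems:
            if p["q"] in training_data:      # already have a good rationale
                continue
            rationale, answer = attempt(p, learned)
            if answer == p["answer"]:        # filter: keep correct rationales only
                training_data[p["q"]] = rationale
                learned |= p["teaches"]      # stand-in for fine-tuning
    return training_data

# Ordered hardest-first so a single pass cannot solve everything.
problems = [
    {"q": "C", "needs": {"a", "b"}, "teaches": {"c"}, "answer": 3},
    {"q": "B", "needs": {"a"},      "teaches": {"b"}, "answer": 2},
    {"q": "A", "needs": set(),      "teaches": {"a"}, "answer": 1},
]
```

With a single iteration only the easiest problem is solved; over three iterations each round's successes unlock the next problem, which is the bootstrapping effect the STaR description refers to.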
[8]
ChatGPT maker secretly developing new type of AI - Reuters -- RT World News
OpenAI, the creator of virtual assistant ChatGPT, is working on a novel approach to its artificial intelligence technology, Reuters has reported. As part of the project, code-named 'Strawberry,' the Microsoft-backed firm is trying to drastically improve the reasoning capabilities of its models, the agency said in an article on Friday. The way Strawberry works is "a tightly kept secret" even within OpenAI itself, a person familiar with the matter told Reuters. The source said the project involves a "specialized way" of processing an AI model after it has been pre-trained on extensive datasets. Its aim is to enable artificial intelligence to not just generate answers to queries, but to plan ahead sufficiently to conduct so-called "deep research," by navigating the internet autonomously and reliably, the source explained. Reuters said it had reviewed an internal OpenAI document detailing a plan for how the US firm could deploy Strawberry to perform research. However, the agency said it was not able to establish when the technology will become available to the public. The source described the project as a "work in progress." When asked about the issue, an OpenAI spokesperson told Reuters: "We want our AI models to see and understand the world more like we [humans] do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." The spokesperson did not address Strawberry directly in his response. Current AI large language models are capable of summarizing vast amounts of text and putting together coherent prose quicker than people do, but usually struggle with common sense solutions that are intuitive to humans. When this happens, the models often "hallucinate" by trying to represent false or misleading information as facts.
Researchers who talked to Reuters said that reasoning, which has so far eluded AI models, is key to artificial intelligence achieving human or super-human level. Last week, one of the world's leading experts in artificial intelligence and a pioneer in deep learning, Yoshua Bengio, again warned of the "many risks," including possible "extinction of humanity," posed by private corporations racing to achieve AI of human-level and beyond. "Entities that are smarter than humans and that have their own goals: are we sure they will act towards our well-being?" the Montreal University professor and scientific director of the Montreal Institute for Learning Algorithms (MILA) said in an article on his website. Bengio urged the scientific community and society as a whole to make "a massive collective effort" to figure out ways to keep advanced AI in check.
[9]
ChatGPT maker OpenAI developing new breakthrough reasoning technology code-named 'Strawberry'. Why is it important?
Sam Altman-led OpenAI is working on a new reasoning technology for its large language models (LLMs) under the code name 'Strawberry', Reuters reported on Friday, citing the company's internal documents and people familiar with the matter. The ChatGPT maker is reportedly hoping that Strawberry will dramatically improve the reasoning capabilities of its AI models. The report states that Strawberry is a tightly kept secret even within the organization. It was earlier known by the name Q* and was already seen as a breakthrough inside the company. However, OpenAI did show some staffers Q* demos in which the LLMs were capable of answering tricky science and math questions that are out of reach for today's commercially available models. The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. Strawberry will reportedly mark a specialized way of processing an AI model after it has been pre-trained on very large datasets. It includes a specialized way of 'post-training' OpenAI's generative AI models, adapting them to improve their performance in specific ways even after they have been 'trained' on generalized data. OpenAI reportedly wants to use Strawberry for performing long-horizon tasks (LHT), which require an AI model to plan ahead and perform a series of actions over an extended period of time. Specifically, OpenAI wants its models to use these capabilities to conduct research by browsing the web autonomously, with support for a 'computer-using agent' (CUA) that can take action based on its findings.
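The long-horizon pattern attributed to Strawberry above — plan a series of steps, act on each, and fold findings back into the plan — can be illustrated with a minimal agent loop. Everything here is a hypothetical stand-in: the "browser" is a canned dictionary and the re-planning rule is a hard-coded heuristic, not OpenAI's CUA.

```python
# Minimal sketch of a long-horizon "research agent" loop: keep a plan of
# pending steps, execute one at a time, and fold each observation back into
# the plan. The web here is a canned dictionary; all names are illustrative.

def browse(query, web):
    """Stand-in for a computer-using agent's single browsing action."""
    return web.get(query, "no results")

def deep_research(question, web, max_steps=5):
    plan = [question]                        # queue of pending research steps
    findings = []
    while plan and len(findings) < max_steps:
        step = plan.pop(0)
        result = browse(step, web)
        findings.append((step, result))
        # A real agent would re-plan with a model; this toy instead follows
        # "see also:" pointers embedded in the results.
        if result.startswith("see also:"):
            plan.append(result.removeprefix("see also:").strip())
    return findings

web = {
    "what is STaR": "see also: STaR paper",
    "STaR paper": "Self-Taught Reasoner, Stanford 2022",
}
notes = deep_research("what is STaR", web)
```

The point of the sketch is the control flow, not the lookup: each action's result can extend the plan, so the agent carries out a multi-step task rather than answering a single query.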
[10]
OpenAI working on new reasoning technology under code name 'Strawberry' - ET Telecom
The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source.
[11]
OpenAI's Q* Gets New Name, Project Strawberry: Report Says It Can Navigate Internet Autonomously With 'Deep Research' And Significantly Better Reasoning Capabilities - Microsoft (NASDAQ:MSFT)
OpenAI, the creator of ChatGPT, is reportedly working on a novel artificial intelligence project, codenamed "Strawberry." This project aims to significantly enhance the reasoning capabilities of AI models. What Happened: Microsoft Corp.-backed OpenAI had been previously rumored to be working on an AI project codenamed Q*, which was said to be a "breakthrough" as far as its capabilities were concerned. Internal OpenAI teams are currently developing Strawberry, according to an internal document reviewed by Reuters. The document reveals OpenAI's plans to use Strawberry for research purposes, although the exact timeline for the project's public release remains unclear. The project's goal is to enable OpenAI's AI to not only generate responses to queries but also to plan and navigate the internet autonomously to perform "deep research." This capability has been elusive for AI models to date. Strawberry, previously known as Q*, is viewed as a breakthrough within the company. It is expected to improve the reasoning capabilities of AI models significantly, enabling them to plan ahead, reflect on how the physical world functions, and solve complex multi-step problems reliably. OpenAI's CEO, Sam Altman, has previously stated that progress in AI will be primarily around reasoning ability. Other tech giants like Alphabet Inc.'s Google, Meta Platforms Inc., and Microsoft are also exploring techniques to enhance reasoning in AI models. OpenAI did not immediately respond to Benzinga's request for a statement. Why It Matters: Project Strawberry's development comes in the wake of a series of events that have shaped OpenAI's trajectory.
In November 2023, Elon Musk jokingly referred to Q* as Q*Anon and expressed his concerns about the project's potential implications for artificial general intelligence, or AGI. During the same period, there were speculations about the safety concerns related to Q*, which was developing at a rapid pace. In May, Altman hinted at major developments in AI capabilities and a massive investment in AGI. These events underscore the significance of Project Strawberry and its potential to revolutionize AI reasoning capabilities.
[12]
What Is Strawberry? Inside OpenAI Mysterious New AI Project
Discover the potential of OpenAI's Strawberry, an AI project aimed at enhancing deep research and human-like reasoning to revolutionize problem-solving. OpenAI has sparked enthusiasm in the AI industry with its newest project, Strawberry. Despite the secrecy surrounding it, the project is generating considerable excitement. Let's delve deeper into our understanding of Strawberry and explore why it could be a major development in artificial intelligence. According to the Reuters report, Strawberry is OpenAI's latest AI model, intended to expand the capabilities of artificial intelligence. At the heart of Strawberry's mission is transforming how AI engages with and understands the world: its main purpose is to independently search and evaluate large quantities of data on the internet, conducting what OpenAI refers to as "deep research." These advanced research capabilities could let AI address challenging real-world problems at a level not previously possible, whether through groundbreaking scientific discoveries, innovative software applications, or solutions to complex global challenges. One of the main objectives of Strawberry is to make AI's reasoning more similar to human cognitive processes. OpenAI's method reportedly includes a thorough "post-training" phase for its current AI models, refining them to generate responses that are more sophisticated and human-like. This is part of OpenAI's larger goal of developing AI systems that comprehend and engage with the world in increasingly human-like ways. Sam Altman, CEO of OpenAI, has stressed that "improvements in reasoning ability will be the key areas of progress." Strawberry is a major advancement in this effort, seeking to narrow the divide between artificial intelligence and human understanding.
Strawberry builds upon the progress achieved by OpenAI's previous endeavor, Q*, which was recognized as a significant milestone in AI innovation. Q* prepared the ground for the advancement of stronger and more sophisticated AI models, and Strawberry takes advantage of this base to concentrate on in-depth research and reasoning. Despite the enthusiasm, details of Strawberry's internal mechanisms remain secret: OpenAI is keeping the specifics of its operation confidential, sparking interest and conjecture in the AI community. The emergence of Strawberry comes during a period of heightened regulatory examination and changes in the composition of OpenAI's board. Recently, prominent technology companies such as Microsoft and Apple reportedly gave up their positions on OpenAI's board, a move that highlighted increasing concerns surrounding AI governance and regulation. OpenAI nevertheless continues to dedicate itself to furthering its research. Despite these obstacles, OpenAI has established a significant collaboration with Los Alamos National Laboratory, a partnership that seeks to investigate AI uses in bioscience studies and showcases Strawberry's potential in non-traditional tech areas. As OpenAI progresses with the development of Strawberry, the potential influence of the project is extensive: its capacity for in-depth research and human-like reasoning could transform fields ranging from scientific research to technological innovation. Even though a public release of Strawberry has not been confirmed, its introduction would represent a major achievement in the advancement of AI. The excitement surrounding Strawberry points to a future where AI not only supports human capabilities but also revolutionizes our approach to solving challenging issues. Strawberry represents a bold step forward in OpenAI's ongoing mission to advance artificial intelligence.
As we await more information, one thing is clear: Strawberry has the potential to redefine the landscape of AI and its applications in the years to come.
[13]
Elon Musk Lauds AI Spotlighting OpenAI's Reasoning Tech Strawberry
In the wake of OpenAI's Strawberry project, Musk's remarks have gained significant traction. A recent post on X by the American entrepreneur Elon Musk reflects a strong sense of confidence in the newly emerging field of AI (artificial intelligence). On July 13, spotlighting OpenAI's new reasoning tech Strawberry, Musk drew a stark contrast with the view that AI development poses a threat to humanity, and the statement soon captured noteworthy attention across the industry. Meanwhile, OpenAI has once again marked a monumental stride in its AI development efforts. In his post, Musk notes how the public was warned about the "paper clip maximizer" scenario: a hypothetical thought experiment by Nick Bostrom illustrating how a seemingly harmless goal could lead to unintended consequences without careful and mindful AI development. Musk, however, hints that AI is far from being a threat, declaring it "strawberry fields forever." In light of OpenAI's developing reasoning tech, the remark signals confidence in AI's potential for the near future, and the phrase "strawberry fields forever" appears to allude to a utopian outcome for AI development, likely a nod to the Beatles song. It is also worth mentioning that the Strawberry project, formerly known as Q*, is expected to be a phenomenal breakthrough for Sam Altman's AI firm, though no official announcement has been made regarding when the model will be available to the public. The endeavor further raises the bar for AI development across the globe. According to reports, the Strawberry model is dedicated to delivering advanced reasoning capabilities.
Sources familiar with the AI firm have stressed the secrecy of the project and its development, revealing no vital updates on the matter. A company spokesperson recently stated: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." In conclusion, Musk's recent remark on Strawberry has ignited optimism for the company's current and future endeavors. It's worth mentioning that Musk also has his own AI startup, xAI.
OpenAI, the creator of ChatGPT, is reportedly working on a new AI technology codenamed "Strawberry" that aims to enhance reasoning capabilities in artificial intelligence models. This development could potentially revolutionize AI's ability to perform complex tasks and conduct deep research.
OpenAI, the artificial intelligence research laboratory behind ChatGPT, is reportedly developing a groundbreaking AI technology under the codename "Strawberry." This project aims to significantly enhance the reasoning capabilities of AI models, potentially revolutionizing the field of artificial intelligence [1].
The "Strawberry" project is focused on creating more advanced AI models capable of sophisticated reasoning and conducting deep research. These enhancements could enable AI to tackle complex problems and perform tasks that require higher-level cognitive functions [2].
While specific details about the technology remain undisclosed, experts speculate that "Strawberry" could have far-reaching implications across various sectors. The improved reasoning abilities could enhance AI's performance in fields such as scientific research, data analysis, and decision-making processes [3].
This development follows OpenAI's pattern of pushing the boundaries of AI technology. The company has consistently introduced innovative AI models, with ChatGPT being one of their most notable recent successes. The "Strawberry" project demonstrates OpenAI's commitment to advancing AI capabilities beyond current limitations [4].
The news of OpenAI's "Strawberry" project has sparked interest within the tech industry. As AI continues to evolve rapidly, companies are racing to develop more sophisticated models. OpenAI's focus on enhancing reasoning capabilities could potentially give them a competitive edge in the AI market [5].
While the potential of "Strawberry" is exciting, it also raises questions about the ethical implications and potential risks associated with more advanced AI systems. As AI models become increasingly sophisticated, concerns about their impact on privacy, job displacement, and decision-making processes in critical areas may intensify [2].
The "Strawberry" project represents a significant step forward in AI development. If successful, it could pave the way for AI systems that can engage in more human-like reasoning, potentially bridging the gap between artificial and human intelligence in certain cognitive tasks [1].
OpenAI, the artificial intelligence research laboratory, is reportedly working on a new reasoning technology under the codename 'Strawberry'. This development aims to enhance AI's ability to solve complex problems and could potentially revolutionize the field of artificial intelligence.
11 Sources
OpenAI is set to launch Project Strawberry this fall, a next-generation AI model with enhanced logical reasoning capabilities. The project is expected to integrate with ChatGPT and potentially become ChatGPT-5.
5 Sources
OpenAI is set to release 'Strawberry', a new AI model for ChatGPT, within the next two weeks. This update aims to enhance ChatGPT's reasoning capabilities and text handling, potentially revolutionizing AI interactions.
17 Sources
OpenAI has launched its new Strawberry series of AI models, sparking discussions about advancements in AI reasoning and capabilities. The model's introduction has led to both excitement and concerns in the tech community.
11 Sources
OpenAI is reportedly preparing to launch its highly anticipated AI model, codenamed 'Strawberry', within the next two weeks. This release comes earlier than initially planned and is expected to showcase significant advancements in AI capabilities.
3 Sources