Curated by THEOUTPOST
On Fri, 31 Jan, 8:05 AM UTC
41 Sources
[1]
Google rolls out cheaper AI model as industry scrutinizes costs
The updated lineup now includes several Gemini variants tailored to different price points and performance levels. In addition to Flash-Lite, Google has made Gemini 2.0 Flash generally available after a developer preview in December and has begun testing an updated version of its flagship "Pro" model. The updated Gemini 2.0 Flash is available via the Gemini API in Google AI Studio and Vertex AI, allowing developers to build production applications with 2.0 Flash. "The Flash series of models is popular with developers as a powerful workhorse model, optimal for high-volume, high-frequency tasks at scale and highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens," Google said. "2.0 Flash is now generally available to more people across our AI products, alongside improved performance in key benchmarks, with image generation and text-to-speech coming soon." Gemini 2.0 Flash-Lite features a 1 million token context window and supports multimodal input, like 2.0 Flash, Google added. It can generate captions for around 40,000 unique images at a cost of less than a dollar on Google AI Studio's paid tier.
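The caption-pricing claim above is easy to sanity-check with back-of-envelope arithmetic (a sketch; the even split of the one-dollar budget across images is an assumption):

```python
# Roughly 40,000 one-line image captions for under $1 on the paid tier
# implies a per-caption budget of at most $0.000025, i.e. 2.5
# thousandths of a cent per image.
captions = 40_000
budget_usd = 1.00

per_caption_usd = budget_usd / captions
print(f"budget per caption: ${per_caption_usd:.6f}")
```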
[2]
Google rolls out Gemini 2.0 Flash, Pro Experimental, and Flash-Lite for multimodal AI tasks
Google on Thursday announced the general availability of Gemini 2.0 Flash, an AI model optimized for high-volume, high-frequency tasks. Developers can now integrate this model into production applications via the Gemini API in Google AI Studio and Vertex AI. Additionally, Google introduced experimental models, including Gemini 2.0 Pro and Gemini 2.0 Flash-Lite, designed to improve coding performance, reasoning capabilities, and cost efficiency. Part of the Flash series first unveiled at Google I/O 2024, Gemini 2.0 Flash is built for multimodal reasoning across large datasets. It features a 1 million token context window, making it highly efficient for processing vast amounts of information. Google highlighted its growing adoption among developers for handling large-scale AI tasks. The model is now available to more users across Google's AI platforms, with enhancements in key benchmarks. Future updates will introduce image generation and text-to-speech capabilities. Google also launched an experimental version of Gemini 2.0 Pro, aimed at advanced coding and complex problem-solving. "It has the strongest coding performance and ability to handle complex prompts among all models we've released so far," said Sundar Pichai, CEO of Google and Alphabet. This model features a 2 million token context window, allowing it to process and analyze extensive datasets. It also integrates with Google Search and code execution tools, enhancing its reasoning and problem-solving abilities. "This extended context window and tool integration make it our most capable model yet," noted Koray Kavukcuoglu, CTO of Google DeepMind. Starting today, Gemini app users can access Gemini 2.0 Flash Thinking Experimental, a model that breaks down prompts into sequential steps to improve reasoning. This model, available for free, enables users to see its thought process, assumptions, and decision-making logic.
Additionally, a variant of 2.0 Flash Thinking will soon integrate with apps like YouTube, Google Search, and Google Maps, enhancing AI-powered assistance. Google introduced Gemini 2.0 Flash-Lite, an optimized model that delivers better performance than Gemini 1.5 Flash while maintaining the same speed and cost. Key features include a 1 million token context window and multimodal input. This model is now available in public preview on Google AI Studio and Vertex AI. As Gemini models become more advanced, Google continues to implement reinforcement learning techniques to enhance accuracy and response quality. Additionally, automated red teaming is used to identify and mitigate security risks, including threats from indirect prompt injection attacks, a cybersecurity concern where malicious instructions are embedded in retrieved data. Google has simplified pricing for Gemini 2.0 Flash and Flash-Lite, eliminating separate pricing tiers for short and long-context requests. This update makes both models more cost-effective than Gemini 1.5 Flash for mixed-context workloads, despite their improved capabilities. The Gemini 2.0 family is now available through Google AI Studio, Vertex AI, and the Gemini app. Google also announced plans to expand these models to Google Workspace Business and Enterprise customers in the near future.
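The Gemini API integration described above can be sketched as a plain REST call using only the standard library (a minimal sketch: the endpoint shape follows the public generateContent REST format, and the `GEMINI_API_KEY` environment variable name is an assumption):

```python
import json
import os
import urllib.request

# Public REST endpoint for the Gemini API's generateContent method.
ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble (but do not send) a generateContent request for the given model."""
    url = ENDPOINT.format(model=model) + "?key=" + api_key
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(url, data=body,
                                  headers={"Content-Type": "application/json"})

req = build_request("gemini-2.0-flash",
                    "Summarize the Gemini 2.0 lineup.",
                    os.environ.get("GEMINI_API_KEY", ""))
# Sending it (requires a valid key and network access):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["candidates"][0]["content"]["parts"][0]["text"])
```

The same request body works for both 2.0 Flash and Flash-Lite; only the model name in the URL changes.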
[3]
Google Releases New Gemini 2.0 Models, Expands AI Capabilities for Developers and Users
Gemini 2.0 Flash, part of the Flash series first introduced at I/O 2024, is now widely accessible via the Gemini API in Google AI Studio and Vertex AI. The model, known for its 1 million token context window, is optimised for high-volume tasks and multimodal reasoning. "We've been thrilled to see its reception by the developer community," said Koray Kavukcuoglu, CTO of Google DeepMind. The update also brings improved benchmarks, with image generation and text-to-speech capabilities expected soon. Alongside Gemini 2.0 Flash, Google has launched Gemini 2.0 Pro Experimental, designed for coding performance and complex prompts. This model features a 2 million token context window, the largest in the Gemini lineup, and supports tools like Google Search and code execution. "It has the strongest coding performance and ability to handle complex prompts," Kavukcuoglu said. The experimental version is available in Google AI Studio, Vertex AI, and the Gemini app for advanced users. For cost-sensitive applications, Google introduced Gemini 2.0 Flash-Lite, a more efficient model that outperforms its predecessor, 1.5 Flash, on most benchmarks. Priced competitively, it can generate captions for 40,000 photos at a cost of less than $1 in Google AI Studio's paid tier. Google emphasised its commitment to safety, using reinforcement learning techniques and automated red teaming to address risks, including indirect prompt injection attacks. "We'll continue to invest in robust measures that enable safe and secure use," Kavukcuoglu stated. The 2.0 Flash Thinking Experimental model will also be available to Gemini app users, accessible via the model dropdown on both desktop and mobile devices. Pricing details and further information are available on the Google for Developers blog. In the coming months, the company plans to expand multimodal capabilities and improve the Gemini 2.0 family. Google recently updated its AI principles, removing explicit bans on using AI for weapons and surveillance.
This change departs from its previous stance, which explicitly prohibited such uses since 2018. The updated policy now focuses on risk-benefit analysis, stating that AI development will proceed where "the overall likely benefits substantially exceed the foreseeable risks and downsides."
[4]
Google Expands Gemini 2.0 Lineup with New AI Models and Updates
Gemini 2.0 Flash-Lite is Google's most affordable AI model, now in public preview. Google has expanded its Gemini 2.0 model lineup, making its latest artificial intelligence (AI) models more widely available. The company announced the general availability of Gemini 2.0 Flash, an updated version of its high-performance, low-latency model, via the Gemini API in Google AI Studio and Vertex AI. Google launched its Gemini 2.0 AI model in December 2024, which the company had previously said was built for the agentic era. Google also launched a new, cheaply priced model to compete with low-cost artificial intelligence models like those from DeepSeek. "We're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash," said Koray Kavukcuoglu, CTO of Google DeepMind, on behalf of the Gemini team, in a blog post on Wednesday. First introduced at I/O 2024, the Flash series of models is a workhorse capable of multimodal reasoning across vast amounts of information, with a context window of 1 million tokens. "2.0 Flash is now generally available to more people across our AI products, alongside improved performance in key benchmarks, with image generation and text-to-speech coming soon," Google said. Additionally, Google introduced Gemini 2.0 Pro (Experimental), which the company says is its best model for coding and complex reasoning, featuring a 2 million token context window. This experimental model is now accessible to developers in Google AI Studio and Vertex AI, and to Gemini Advanced users. The 2.0 Flash Thinking Experimental model, first updated earlier this year to combine high-speed processing with complex reasoning, is now rolling out to Gemini app users on both desktop and mobile.
Another major release is Gemini 2.0 Flash-Lite, which Google says is its most cost-efficient AI model to date. It maintains the speed and affordability of its predecessor, 1.5 Flash, while delivering improved performance. This model is now in public preview in Google AI Studio and Vertex AI, according to Google. Google also reaffirmed its commitment to AI safety, highlighting new reinforcement learning techniques and automated red teaming to mitigate risks like indirect prompt injection. For example, Google noted that its Gemini 2.0 lineup was built with new reinforcement learning techniques that use Gemini itself to critique its responses. "This resulted in more accurate and targeted feedback and improved the model's ability to handle sensitive prompts," Google said.
[5]
Google makes Gemini 'thinking' model available on the app and launches Gemini 2.0 Pro
The Google Gemini app now includes a reasoning model that shows its "thought process." On Wednesday, the tech giant announced the availability of Gemini 2.0 Flash Thinking Experimental on Gemini's desktop and mobile versions. In addition to making its reasoning model, 2.0 Flash Thinking, more accessible to users, Google shared other announcements related to the Gemini family: the launch of Gemini 2.0 Pro, expanded access to Gemini 2.0 Flash in AI Studio and Vertex AI, and a low-cost model called Flash-Lite. Over at Google, it looks like business as usual, as the company stays the course, advancing its AI strategy by incrementally adding more capabilities to its Gemini family. But the introduction of Gemini 2.0 Flash-Lite is interesting timing given DeepSeek's disruption last week. DeepSeek spooked the AI industry with its R1 model that's just as capable as competitor models, but made for a fraction of the cost, upending beliefs that more money equals better models. Google has previously released a cost-efficient model called 1.5 Flash, and CEO Sundar Pichai downplayed DeepSeek's impact on Google in its Q4 2024 earnings call on Tuesday, saying, "For us, it's always been obvious" that models could become more cost-efficient over time. That said, Google plans to spend $75 billion in capital expenditure this year, a huge uptick from $32.3 billion in 2023. Google's announcements today seem to cover all the bases of AI development: reasoning models, advanced models, and low-cost models. Announced in December, 2.0 Flash Thinking rivals OpenAI's o1 and o3-mini reasoning models in that it's capable of working through more complex problems and showing how it reasons through them. Previously, it debuted in AI Studio but now has broad availability in experimental mode as a dropdown option in the Gemini app. There's also a version that integrates with YouTube, Google Maps, and Google Search.
With this version, the model might opt to search the web for answers, pull up relevant YouTube links, or query Google Maps instead of relying solely on its own training data. For example, if you ask 2.0 Flash Thinking without app integration, "How long would it take to walk to China?" it would rely on its own knowledge base to reason through the vagueness and various factors involved. But in the version with the aforementioned apps, it immediately goes to Google Maps for answers (as laid out in its "thinking"). The Gemini 2.0 Pro release is the most noteworthy announcement in terms of technical advancement. According to Google, 2.0 Pro is its most capable model, best for coding and handling complex tasks (not to be confused with 2.0 Flash Thinking, which can also handle complex problems but shows its work). Gemini 2.0 Pro is available as an experimental version to Gemini Advanced subscribers and in AI Studio and Vertex AI. Gemini 2.0 Flash-Lite has a 1 million token context window and multimodal input, and runs at the same speed and price as the earlier 1.5 Flash version, so it packs a punch for its size. Flash-Lite is available in preview mode on AI Studio and Vertex AI.
[6]
Google Just Launched Gemini 2.0 Flash and Pro for Users and Developers
Google has issued another round of significant AI model announcements, upgrading its Gemini offerings across the board to bring users and developers artificial intelligence engines that are, according to the company, more capable and reliable. In the wake of DeepSeek's rise and new OpenAI models, the pace of AI development isn't slowing down. First up, the Gemini 2.0 Flash model that appeared in December for a select few is now rolling out to everyone, so you'll see it in the Gemini apps on desktop and mobile (this actually began appearing last week, so you may have already used it). The Flash models are designed to be faster and more lightweight, without too many performance trade-offs. Google is also making a Gemini 2.0 Flash Thinking Experimental model available for all users to test. It's another "reasoning" model, like the ones we've seen in ChatGPT, where the AI displays its thinking as it goes -- with the intention of producing results that are more accurate and more transparent. There's also a version of this model appearing to all users with access to apps included: Google Search, Google Maps, and YouTube. It'll return real-time information from the web, as well as references to Google Maps data (including journey times and location details), and information pulled from YouTube videos. Lastly for the Flash models, Google is making Gemini 2.0 Flash-Lite available to developers. It's the most cost-efficient Gemini model yet -- which will appeal to those building tools with Gemini -- while still maintaining high levels of processing performance across a variety of multimodal inputs (text, images, and more). Next up, the even more capable Gemini 2.0 Pro Experimental model is here -- a little slower than the Flash equivalents, but better at thinking, writing, coding, and problem solving. This model is now appearing in experimental form for developers, and for any users who are paying $20 a month for Gemini Advanced. 
"It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we've released so far," says Google. It can also take in two million tokens per prompt, which averages out at about 1.4 million words -- roughly the Bible, twice. That's double the capacity of the 2.0 Flash models, and Google provided some benchmarks, too. In the general MMLU-Pro benchmark, we've got scores of 71.6 percent, 77.6 percent, and 79.1 percent respectively for Gemini 2.0 Flash-Lite, 2.0 Flash, and 2.0 Pro, compared to 67.3 percent for 1.5 Flash and 75.8 percent for 1.5 Pro. There are similar improvements across the board on other AI benchmarks, with Gemini 2.0 Pro Experimental hitting a score of 91.8 percent on the MATH benchmark. That compares to 90.9 percent for 2.0 Flash, 86.8 percent for Flash-Lite, 86.5 percent for 1.5 Pro, and 77.9 percent for 1.5 Flash. As is the norm for AI model launches like this, details are thin on the training data used, hallucination risks and inaccuracies, and energy demands -- though Google does say the new Flash models are its most efficient yet, while all its latest models are better than ever at reasoning feedback and stopping potential safety and security hacks.
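The token-to-word arithmetic above can be checked directly (a sketch; the roughly-0.7-words-per-token ratio is a common rule of thumb for English text, not a Google figure):

```python
def approx_words(context_tokens: int) -> int:
    """Estimate English words that fit in a context window.

    Uses integer arithmetic (about 7 words per 10 tokens) to avoid
    float rounding; the ratio itself is an assumed rule of thumb.
    """
    return context_tokens * 7 // 10

pro_words = approx_words(2_000_000)    # Gemini 2.0 Pro: 2 million token window
flash_words = approx_words(1_000_000)  # 2.0 Flash / Flash-Lite: 1 million tokens

print(pro_words)    # matches the "about 1.4 million words" figure
print(flash_words)  # half the Pro window, as the article notes
```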
[7]
Google expands access to Gemini 2.0 AI models and debuts experimental versions - SiliconANGLE
Google expands access to Gemini 2.0 AI models and debuts experimental versions Google LLC today said it's expanding its Gemini artificial intelligence model family and increasing the availability of existing models. To start, Google is making an updated Gemini 2.0 Flash generally available in Google AI Studio and Vertex AI, the company's fully-managed machine learning development platform. This follows the company making 2.0 Flash available to all users in the Gemini app on desktop and mobile. As for experimental Gemini models, Google released an experimental version of Gemini 2.0 Pro, the company's flagship model with the best performance for coding and complex prompts, and announced 2.0 Flash Thinking Experimental for general availability. The new 2.0 Flash Thinking model represents a small, fast AI model optimized for logic and reasoning. Google also released a brand-new model, Gemini 2.0 Flash-Lite, designed to be the company's most cost-efficient AI model, into public preview. Google said by sharing early, experimental versions of Gemini 2.0 with developers and advanced users the company has received valuable feedback about the strengths of its AI models. With the release of the experimental version of Gemini 2.0 Pro, the company hopes to continue that trend. The experimental Gemini 2.0 Pro model comes with a context window of 2 million tokens, which allows it to ingest extensive documents and videos, or around 1.5 million words. It can also call tools such as Google Search and execute code. Gemini 2.0 Pro is the successor to Google's previous flagship Gemini 1.5 Pro model the company launched last February. Aimed at producing a model that does "deep thinking" by optimizing around reasoning, Google released 2.0 Flash Thinking Experimental in December. Chinese AI startup DeepSeek's open-source R1 reasoning model similarly does deep thinking but garnered much more attention from the media. 
Google built the new experimental model on the speed and performance of 2.0 Flash and trained it to break down prompts into a series of steps so that it essentially does its homework. "2.0 Flash Thinking Experimental shows its thought process so you can see why it responded in a certain way, what its assumptions were, and trace the model's line of reasoning," Patrick Kane, director of product management for the Gemini app at Google, said in the announcement. The company also said there will be a version of Flash Thinking that can interact with apps such as YouTube, Search and Google Maps. It will allow the reasoning model to behave as a helpful AI-powered assistant using its inherent reasoning capabilities. The new 2.0 Flash Thinking Experimental and 2.0 Pro Experimental will roll out to the Gemini web and mobile app today. The newest model in Google's Gemini family, 2.0 Flash-Lite maintains the speed and price of 1.5 Flash while outperforming it on a majority of quality benchmarks. Like 2.0 Flash, Flash-Lite provides a 1 million token context window and multimodal input. As an example, Google said the new model could generate one-line captions for around 40,000 unique photos for less than a dollar in Google AI Studio's paid tier. This kind of speed and efficiency at scale, for such a low cost, is especially sought after by marketing and retail outfits. For marketers, the model could help generate custom emails for clients at low cost, and in retail it would be good for generating large numbers of text descriptions for product photos without breaking the bank. Gemini 2.0 Flash-Lite is rolling out to Google AI Studio and Vertex AI in public preview today.
[8]
Google is rolling out four new experimental Gemini AI models, including '2.0 Flash Thinking'
Summary: Google has released several new Gemini models, including 2.0 Flash-Lite for cost efficiency, 2.0 Flash Thinking Experimental for transparent reasoning, and 2.0 Pro Experimental for advanced coding and complex prompts. Gemini 2.0 Flash-Lite outperforms the original Gemini 1.5 Flash in most benchmarks, offering strong performance at the same speed and cost. It's ideal for everyday tasks and is available in Google AI Studio and Vertex AI. 2.0 Pro Experimental is the top-of-the-line model for coding and complex prompts, featuring a massive 2 million token context window, accessible to Gemini Advanced users and developers. Roughly a week after making Gemini 2.0 Flash available to all as a stable model, Google is now rolling out several new Gemini models -- some designed for coding performance and complex prompts and the others for cost efficiency. As highlighted by the tech giant in a new blog post today, one of its new models is a lighter version of the Gemini model you would normally rely on for daily tasks, and it comes in the form of Gemini 2.0 Flash-Lite. Even though titled 'Lite,' Google says that the new model outperforms the regular Gemini 1.5 Flash model in a majority of benchmarks, only lagging behind when it comes to understanding long context and code generation in Python. It features the same 1 million token context window and multimodal input as Gemini 2.0 Flash, while running at the same speed and cost as 1.5 Flash. The new model is currently limited to Google AI Studio and Vertex AI in public preview. On the other hand, new models like Gemini 2.0 Flash Thinking Experimental and 2.0 Flash Thinking Experimental with apps are available to try out within the Gemini app on both desktop and mobile.
Understand why Gemini said what it said The new models are described as "best for multi-step reasoning" and for "reasoning across YouTube, Maps & Search," respectively. The former is currently ranked as the world's best model on the Chatbot Arena LLM Leaderboard, and is absolutely free to use. The primary advantage of using 2.0 Flash Thinking Experimental and 2.0 Flash Thinking Experimental with apps is the models' ability to share their thought process. According to Google, "this model is trained to break down prompts into a series of steps to strengthen its reasoning capabilities...2.0 Flash Thinking Experimental shows its thought process so you can see why it responded in a certain way, what its assumptions were, and trace the model's line of reasoning." Although not immediately beneficial, knowing the reasoning behind why the AI model said what it said can offer a greater level of transparency, allowing users to put more trust in the model's output. Additionally, it can help users understand the underlying logic behind the answer to a query, which can be a learning experience. Lastly, the tech giant also unveiled its current top-of-the-line Gemini 2.0 Pro Experimental, which it says is the "best model yet for coding performance and complex prompts." Currently limited to Gemini Advanced users via the Gemini app on desktop and mobile, and to developers in Google AI Studio and Vertex AI, Gemini 2.0 Pro Experimental features an impressive 2 million token context window -- which roughly equates to 3,000 pages of text. The large window allows it to "comprehensively analyze and understand" information, integrate results from Google Search, and execute code.
[9]
Google Rolling Out Gemini 2.0 Flash Thinking to Mobile Apps
Highlights: Gemini 2.0 Flash is now available via API; Gemini Advanced users are getting the Gemini 2.0 Pro model; and developers can now access the Gemini 2.0 Flash-Lite AI model. Google announced the availability of several Gemini 2.0 artificial intelligence (AI) models on Wednesday. The Mountain View-based tech giant is rolling out its Gemini 2.0 Flash Thinking Experimental AI model to mobile apps and the web client. Alongside this, an agentic version of the AI model is also being rolled out that can interact with certain apps. The company is also releasing an experimental version of Gemini 2.0 Pro to paid subscribers. Additionally, a lite version of 2.0 Flash is being rolled out as a public preview. In a blog post, the tech giant detailed all the different models currently being released to users. Some of these are available to free users of Gemini, some only to paid subscribers, and others exclusively to developers. The most notable among these is Gemini 2.0 Flash Thinking, a reasoning-focused model comparable to DeepSeek-R1 and OpenAI's o1 models. It was first released in December 2024, but until now it could only be accessed from Google's AI Studio. Now, the company is rolling out the model to all Gemini app and website users. The new AI model will be available via the model selector option at the top of the interface. Gadgets 360 staff members have not seen the model yet; however, it should be available globally in the coming days. Notably, it is unclear whether the free tier will face any rate limits for using the Thinking model. Alongside, the tech giant is also making an agentic version of 2.0 Flash Thinking available to users. This model can interact with apps such as YouTube, Google Search, and Google Maps. With this integration, users should be able to ask Gemini to complete certain tasks on these apps. The extent of its abilities is currently not known.
For Gemini Advanced users, Google is rolling out an experimental version of Gemini 2.0 Pro, the high-performance frontier model of the 2.0 series. The model is said to be good at breaking down complex problems as well as at coding and mathematics-related tasks. This is the tech giant's most advanced model, with a context window of 2 million tokens. The application programming interface (API) of the model will also be able to call tools such as Google Search and code execution. It will also be available in Google AI Studio and Vertex AI. Developers will now be able to access Gemini 2.0 Flash-Lite. The company said it offers better performance than 1.5 Flash while retaining its speed and cost. It has a context window of one million tokens and accepts multimodal input. In addition, Google is also rolling out the 2.0 Flash model to developers via the Gemini API. Currently, it can handle text-based tasks, and the company will add image generation and text-to-speech capabilities in the future. Both of these models will also be available in Google AI Studio and Vertex AI.
[10]
Google launches new AI models and brings 'thinking' to Gemini | TechCrunch
Google launched its much-anticipated new flagship AI model, Gemini 2.0 Pro Experimental, on Wednesday. The announcement was part of a series of other AI model releases. The company is also making its "reasoning" model, Gemini 2.0 Flash Thinking, available in the Gemini app. Notably, Google is releasing these AI models as the tech world remains fixated on cheaper AI reasoning models offered by the Chinese AI startup DeepSeek. DeepSeek's models match or surpass the performance of leading AI models offered by American tech companies. At the same time, businesses can access DeepSeek's models through the company's API for a relative steal. Google and DeepSeek both released AI reasoning models in December, but DeepSeek's R1 got a lot more attention. Now, Google may be trying to put its Gemini 2.0 Flash Thinking model in front of more eyes through its popular Gemini app. As for Gemini 2.0 Pro, the successor to the Gemini 1.5 Pro model Google launched last February, Google says that it is now the leading model in its Gemini AI model family. Google accidentally announced the Gemini 2.0 Pro model's release in the Gemini app's changelog roughly a week ago. But this time, it's for real. The company is releasing an experimental version of the model on Wednesday in its AI development platforms, Vertex AI and Google AI Studio. Gemini 2.0 Pro will also be available to subscribers to Gemini Advanced in the Gemini app. Specifically, the new Gemini Pro model excels at coding and handling complex prompts, per Google, and it comes with "better understanding and reasoning of world knowledge" than any of the company's previous models. Gemini 2.0 Pro can call tools like Google Search, and execute code on behalf of users. Gemini 2.0 Pro's context window is 2 million tokens, meaning it can process about 1.5 million words in one go.
Expressed another way, Google's newest AI model could ingest all seven books in the Harry Potter series in a single prompt and still have about 400,000 words left over. Google is also making its Gemini 2.0 Flash model generally available on Wednesday. This model was announced in December and is now available to all users of the Gemini app. Lastly, possibly to rival the excitement surrounding DeepSeek's models, Google is introducing a new, more cost-efficient AI model, Gemini 2.0 Flash-Lite. The company says this model outperforms its Gemini 1.5 Flash model, but runs at the same price and speed.
[11]
Google's Gemini 2.0 Flash AI Model Now Rolling Out to All Users
Google said Gemini 2.0 Flash outperforms 1.5 Pro in several benchmarks Google is rolling out the stable version of the Gemini 2.0 Flash to all users. Announced on Thursday, the artificial intelligence (AI) model will replace the experimental preview of 2.0 Flash, which was first released in December 2024. The new AI model can be accessed in both the web client and the mobile apps by all users. When the tech giant first announced the Gemini 2.0 family of AI models, it said the new models would offer improved capabilities including native support for image generation and audio generation. In a blog post, the Mountain View-based tech giant announced that the Gemini 2.0 Flash AI model is now coming to the Gemini app. Earlier it was only available in the web version. While Gemini Advanced subscribers will get access to the new generation model, free users will also be able to access the model. Currently, it is not known whether there's a rate limit for the free tier. Google announced that Gemini Advanced users will also get access to the one million token context window with the AI model. The company says this is enough to process 1,500 pages of file uploads. Alongside, paid subscribers will also get access to premium features such as Deep Research, Gems, and more. The tech giant said that the Gemini 2.0 Flash supports multimodal output such as image generation with text and steerable text-to-speech (TTS) multilingual audio. Further, the AI model is also equipped with agentic functions. 2.0 Flash natively calls tools like Google Search, code execution-related tools, and third-party functions once a user defines them via the API.
Notably, image generation in Gemini now supports the latest version of the Imagen 3 AI model. Gemini 1.5 Flash (for free users) and 1.5 Pro (for Advanced users) will also remain available for the next few weeks. Google has extended their availability to give users enough time to finish existing conversations and move to the new AI model.
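The third-party function definitions mentioned above (2.0 Flash calling user-defined tools via the API) can be sketched as a request payload. This is a hypothetical `get_weather` tool; the declaration shape approximately follows the Gemini API's function-calling format, and nothing here is sent over the network:

```python
import json

# A hypothetical third-party function exposed to 2.0 Flash as a tool.
# The declaration (name/description/parameters) follows the Gemini
# function-calling request shape; the weather function is an assumption.
WEATHER_TOOL = {
    "function_declarations": [{
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]
}

def build_payload(prompt: str) -> str:
    """Build a generateContent request body that offers the tool to the model."""
    return json.dumps({
        "contents": [{"parts": [{"text": prompt}]}],
        "tools": [WEATHER_TOOL],
    })

payload = build_payload("What's the weather in Paris?")
```

When the model decides the tool is relevant, its response contains a function call (name plus arguments) for the caller to execute and feed back, rather than a plain text answer.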
[12]
Google Launches Gemini 2.0 for Everyone. Here Are the Highlights
The flood of AI news from Google comes in the wake of the launch of DeepSeek, the breakthrough Chinese AI tool that's been making headlines. Google delivered big artificial intelligence news on Wednesday, launching its next-generation chatbot, Gemini 2.0. Google is opening up new Gemini 2.0 models in a multipronged initiative. The updated Gemini 2.0 Flash is now generally available via the Gemini API in Google AI Studio and Vertex AI, meaning developers can now build production applications with 2.0 Flash. This gives more people access to the smaller, more efficient version of Google Gemini following the big splash by DeepSeek, the generative AI tool that burst onto the scene in late January. Google is also countering Chinese AI startup DeepSeek with Gemini 2.0 Flash-Lite, a less expensive alternative that is now available in Google AI Studio and Vertex AI for developers. (By comparison, all versions of DeepSeek's AI offerings are free.) Google says Flash-Lite (get it?) is its most cost-efficient model yet. People with the Gemini app will also get to try out Google Gemini 2.0 Flash Thinking Experimental, a benchmark darling as the top AI model according to Chatbot Arena's leaderboards. This version of Gemini 2.0 Flash is special, thanks to its ability to break down user prompts into a series of steps to help provide better responses to complex prompts. For those looking for something more powerful, Google also added Gemini 2.0 Pro Experimental to the Gemini app for Gemini Advanced users, as well as Google AI Studio and Vertex AI. Flash Thinking can also interact with Google apps like Maps, YouTube and Search.
"These connected apps already make the Gemini app a uniquely helpful AI-powered assistant, and we're exploring how new reasoning capabilities can combine with your apps to help you do even more," Patrick Kane, director of product management for the Gemini app at Google, wrote in a blog post. The wide release of Google Gemini 2.0 Pro Experimental, Flash-Lite and Flash Thinking is the latest in a string of AI announcements from Google. Late last month, the Samsung Galaxy S25 series became the first smartphones to ship with Google's Project Astra, an AI feature that lets users point their phone cameras at things to get information on them. The company also launched its Daily Listen feature, which generates audio summaries of news updates based on your Search and Discover activity. Google's news is a counterpoint to the hype about DeepSeek, which boasts similar performance to bigger AI models at a fraction of the price. It's too soon to say if DeepSeek will stick around for the long haul due to a variety of factors, but Google's announcement and OpenAI's o3-mini release may help pry attention away from the young AI upstart.
[13]
Google rolls out Gemini 2.0 for all users: What you need to know about latest AI models
Google launched Gemini 2.0, its most advanced suite of AI models, on February 5. The Gemini 2.0 Flash model is now available to all, while the Gemini 2.0 Pro model is currently being tested for Gemini Advanced subscribers. Google also introduced the Flash-Lite model, its most affordable AI option. In addition, the Gemini 2.0 Flash Thinking mode, which offers improved reasoning abilities, is being added to the Gemini app. This mode was first introduced in December. The Gemini 2.0 Flash model is now available to more users. It was first launched as an experimental model in December, and the stable version started rolling out last week. Google has made this model available in more of its AI products, including the Gemini mobile app. Google also plans to add features like text-to-speech and image creation soon. In addition to the Gemini app, the 2.0 Flash model can be used through the Gemini API in Google AI Studio and Vertex AI, allowing developers to integrate it into their own tools. Google has launched an experimental version of Gemini 2.0 Pro, designed to handle complex prompts with improved reasoning and world knowledge. It offers the best coding performance among Gemini models. With a 2 million token context window, it can process large datasets efficiently and use tools like Google Search and code execution to boost its abilities. This model is available to Gemini Advanced subscribers and developers in Google AI Studio and Vertex AI. Google's new 2.0 Flash-Lite model is the most cost-efficient Gemini model so far. It offers the same speed and pricing as the 1.5 Flash model, with a 1 million token context window and multimodal input support, like the 2.0 Flash model. Gemini 2.0 Flash-Lite is now available in public preview through Google AI Studio and Vertex AI.
Google is adding the Gemini 2.0 Flash Thinking Experimental mode to the Gemini app, now available in the model dropdown on both desktop and mobile. Previously only in Google AI Studio and Vertex AI, this mode improves reasoning by showing the AI's thought process. When solving complex problems, Thinking mode provides a step-by-step breakdown of how the AI approaches a question, making it easier for users to understand its reasoning. 1. What is Google Gemini 2.0? Google Gemini 2.0 is a suite of advanced artificial intelligence models developed by Google. It includes several different versions designed for various purposes, such as the Flash, Flash-Lite and Pro models. These models are built to handle complex tasks like problem-solving, reasoning, and processing large datasets. 2. What is the availability of Google Gemini 2.0? Users can begin using Gemini 2.0 right away, with additional updates to come as Google continues to improve the model.
[14]
Gemini 2.0 Models Arrive for All Users
Google announced today that it is updating its Gemini 2.0 models and releasing them to the world for all users to try. The updates are for Gemini 2.0 Flash, 2.0 Pro Experimental, and 2.0 Flash-Lite. For 2.0 Flash, the Gemini team details that it is generally available to more people across Google's AI products, alongside improved performance in key benchmarks, "with image generation and text-to-speech coming soon." This is the model most users will be accessing via the Gemini app. For users and developers looking for something more cost-efficient, Google has Gemini 2.0 Flash-Lite. As the company explains: "We've gotten a lot of positive feedback on the price and speed of 1.5 Flash. We wanted to keep improving quality, while still maintaining cost and speed. So today, we're introducing 2.0 Flash-Lite, a new model that has better quality than 1.5 Flash, at the same speed and cost. It outperforms 1.5 Flash on the majority of benchmarks." The benchmarks referred to include GPQA, MMLU-Pro, LiveCodeBench, Bird-SQL, and a long list of others. Google posted benchmark numbers for the suite of Gemini 2.0 models and there are increases across the board for each model. Those looking for the absolute best figures should opt for Pro Experimental, which Google says, "has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we've released so far." Gemini 2.0 Pro Experimental is accessible via Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users. You can read Google's complete blog on the Gemini 2.0 rollout by following the link below.
[15]
Google releases Gemini 2.0 models: 'We kicked off the agentic era'
TL;DR: Google has launched Gemini 2.0, featuring models Gemini 2.0 Flash (high-speed multi-modal), Gemini 2.0 Pro (Experimental) (top-tier coding/reasoning), and Gemini 2.0 Flash-Lite (cost-efficient). In response to rising competition from DeepSeek, Google has officially released its latest AI model, Gemini 2.0. The launch was announced in Google's blog detailing the expanded availability of Gemini 2.0 models, highlighting improvements in efficiency, coding performance, AI safety, and costs. The new offerings include Gemini 2.0 Flash, a high-speed multi-modal model (allowing voice, text, image, and video inputs) optimized for large-scale tasks; Gemini 2.0 Pro (Experimental), Google's most powerful coding and complex reasoning model; and Gemini 2.0 Flash-Lite, a cost-efficient AI model that iterates on the previous Gemini 1.5 Flash. The move comes in response to escalating competition from Chinese firm DeepSeek, which burst onto the scene with the market-shaking DeepSeek R1 model on January 20. Google is not the only firm to release new offerings in response to the Chinese competitor, with OpenAI quickly releasing its latest model, o3-mini, free of charge, along with autonomous AI agents Operator and Deep Research in rapid succession. The release of the Gemini 2.0 series included a range of benchmarks, highlighting the advancements in high-frequency tasks, coding, complex reasoning, and long-context understanding. Google also emphasized the release as the next phase in its AI evolution, writing: "Our best model yet for coding performance and complex prompts" and "We kicked off the agentic era." While benchmarks provide a technical snapshot of the model's performance improvements, the real test is how they stack up in the hands of users. Gemini 2.0 Flash is available to Gemini Advanced users via the Gemini API in Google AI Studio, Vertex AI, and the Gemini app.
You can also get your hands on Gemini 2.0 Pro (Experimental) and Gemini Flash-Lite through Google AI Studio and Vertex AI.
[16]
Google Gemini 2.0 is now free for users -- here's how to access it now
Google DeepMind today announced significant updates to its Gemini 2.0 AI model lineup, introducing new versions that offer enhanced performance as well as models that are accessible to users at no cost. These developments aim to make Google AI models more widely available in the wake of competitors like OpenAI and DeepSeek offering similar models for free. Addressing the need for affordable AI, Google has introduced Gemini 2.0 Flash-Lite. This model offers better quality than its predecessor, 1.5 Flash, while maintaining the same speed and cost-efficiency. It supports a 1 million token context window and multimodal input, making it suitable for high-volume, high-frequency tasks. For instance, it can generate relevant one-line captions for approximately 40,000 unique photos at a cost of less than a dollar in Google AI Studio's paid tier. Gemini 2.0 Flash-Lite is currently available in public preview in Google AI Studio and Vertex AI. The Gemini 2.0 Flash model is also now generally available through the Gemini API in Google AI Studio and Vertex AI. This model supports multimodal input with text output and features a context window of 1 million tokens, enabling efficient processing of extensive information. Upcoming enhancements include image generation and text-to-speech capabilities, further broadening its applicability. Not one to leave out developers, Google has also released an experimental version of Gemini 2.0 Pro. Designed for advanced coding performance and managing complex prompts, this model offers developers an expanded context window of 2 million tokens, allowing for comprehensive analysis of vast information. Additionally, it can utilize tools like Google Search and code execution to enhance its functionality. Developers can access this experimental model in Google AI Studio and Vertex AI, while Gemini Advanced users can explore it via the Gemini app on both desktop and mobile platforms. 
To access the latest models, simply log in to your Gemini AI account and use the drop-down menu in the lefthand corner. Google DeepMind has always emphasized its dedication to responsible AI development. The Gemini 2.0 lineup incorporates new reinforcement learning techniques, utilizing the model itself to critique its responses, leading to more accurate and targeted feedback. Automated red teaming is also employed to assess safety and security risks, including indirect prompt injection attacks. These developments highlight Google DeepMind's ongoing efforts to advance AI technology, making it more efficient, accessible, and safe for a broad spectrum of applications.
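As a quick sanity check on the 40,000-captions-for-under-a-dollar figure quoted above, the arithmetic can be sketched in a few lines. The per-image token count (~258 input tokens, a commonly cited figure for Gemini image inputs), the ~10-token caption length, and the Flash-Lite list prices used here ($0.075 per million input tokens, $0.30 per million output tokens) are assumptions for illustration, not figures taken from this article.

```python
# Back-of-envelope check of the "caption ~40,000 images for under $1" claim.
# All token counts and prices below are assumptions for illustration.

INPUT_PRICE_PER_M = 0.075   # assumed USD per 1M input tokens (Flash-Lite)
OUTPUT_PRICE_PER_M = 0.30   # assumed USD per 1M output tokens (Flash-Lite)

def caption_batch_cost(n_images, tokens_per_image=258, tokens_per_caption=10):
    """Estimate the dollar cost of generating one-line captions for n_images."""
    input_tokens = n_images * tokens_per_image      # image inputs
    output_tokens = n_images * tokens_per_caption   # short caption outputs
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

cost = caption_batch_cost(40_000)
print(f"~${cost:.2f} to caption 40,000 images")
```

Under these assumptions the batch lands around 89 cents, comfortably under a dollar, which is consistent with the claim.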
[17]
Gemini app adding 2.0 Pro and 2.0 Flash Thinking Experimental
Google is following the consumer launch of 2.0 Flash with new preview models that will be available to test in the Gemini app: 2.0 Pro Experimental and 2.0 Flash Thinking Experimental. In December, Google started testing 2.0 Experimental Advanced (gemini-exp-1206) alongside the Flash preview. Many assumed it would launch in the Pro family, and Google today released an updated model with Gemini 2.0 Pro Experimental. Google says 2.0 Pro Experimental is its "best model yet for coding performance and complex prompts." It also has "better understanding and reasoning of world knowledge, than any model we've released so far." For the Gemini API, there's a 2 million token context window that "enables it to comprehensively analyze and understand vast amounts of information." In Gemini Advanced today, you'll get 1 million like before. Gemini Advanced subscribers ($19.99 per month) will be able to preview 2.0 Pro Experimental on the web and mobile app. It's rolling out starting today to the Gemini app (already live on the web), and also available for developers (Google AI Studio + Vertex AI). Google debuted its first thinking model in December, and updated it last month in AI Studio. Gemini 2.0 Flash Thinking Experimental will be available to test in the Gemini app for free. Featuring the speed and performance of 2.0 Flash, Google says this "model is trained to break down prompts into a series of steps to strengthen its reasoning capabilities and deliver better responses." You can see that in real-time in the UI: 2.0 Flash Thinking Experimental shows its thought process so you can see why it responded in a certain way, what its assumptions were, and trace the model's line of reasoning. Meanwhile, Google is also making available a second version "2.0 Flash Thinking Experimental with apps" that can reason/"interact with apps like YouTube, Search and Google Maps." 
These connected apps already make the Gemini app a uniquely helpful AI-powered assistant, and we're exploring how new reasoning capabilities can combine with your apps to help you do even more. Gemini 2.0 Flash has hit general availability (GA) for developers building apps and features with Google's API. Pricing details are available here. Image generation and text-to-speech capabilities are "coming soon." The company also shared that the 2.0 family was "built with new reinforcement learning techniques that use Gemini itself to critique its responses." This resulted in more accurate and targeted feedback, which in turn improved the model's ability to handle sensitive prompts. Google today announced the cost-efficient Gemini 2.0 Flash-Lite model for developers. It is better than 1.5 Flash across a majority of benchmarks, while maintaining the speed and cost that devs have come to appreciate. It's available in public preview through Google AI Studio and Vertex AI.
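For developers picking up the newly GA API, the shape of a generateContent call can be sketched as follows. This is a minimal sketch assuming the public REST endpoint and the "gemini-2.0-flash" model ID described in Google's developer docs around this announcement; check the current API reference before relying on either.

```python
# Minimal sketch of building a Gemini API generateContent request.
# The v1beta path and model ID are assumptions based on Google's public
# developer docs at the time of the announcement.
import json

API_ROOT = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_request(model: str, prompt: str):
    """Return the (url, json_body) pair for a text-only generateContent call."""
    url = f"{API_ROOT}/models/{model}:generateContent"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, json.dumps(body)

url, body = build_generate_request("gemini-2.0-flash", "Summarize this article.")
# The request would be sent as an authenticated POST (API key omitted here),
# e.g. requests.post(url, params={"key": API_KEY}, data=body).
```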
[18]
Google's latest Gemini drop includes Pro access and Flash-Lite - here's what's new
Just last week, Google rolled out Gemini 2.0 Flash to all Gemini app users on mobile and desktop. Building on that momentum, Google is now expanding access and adding to its Gemini model offerings. On Wednesday, Google announced it is finally releasing an experimental version of Gemini 2.0 Pro. The model, which at 2 million tokens has Google's largest context window to date, is the company's most robust and advanced model for coding and complex prompts. The large context window allows the model to analyze and reference a robust amount of information at once, improving overall performance and assistance by including additional context. It also enables the model to call on tools such as code execution, making it more suitable for a variety of tasks. Furthermore, Gemini 2.0 Pro, the successor to 1.5 Pro unveiled a year ago, outperformed the rest of Google's Gemini models on a series of benchmarks, including the MMLU-Pro, which tests for general capabilities; GPQA (diamond), which tests for reasoning; LiveCodeBench (v5), which tests for code generation in Python; and MATH, which tests for challenging math problems in algebra, geometry, pre-calculus, and more. This model is available as an experimental offering for Gemini Advanced users in the drop-down toggle. To be a Gemini Advanced user, you need to enroll in the Google One AI Premium plan, which costs $20 per month. Of course, Google couldn't forget its developer user base, also offering it to developers via Google AI Studio and Vertex AI. Beyond that, Gemini 2.0 Flash, Google's model for faster responses and stronger performance ideal for high-volume and high-frequency tasks at scale, is becoming available in more Google products, including the Gemini API in Google AI Studio and Vertex AI, in addition to Gemini app access, which was announced last week.
Lastly, due to the positive feedback Google received on Gemini 2.0 Flash, it has now introduced a new model, 2.0 Flash-Lite. This model retains the 1 million token context window and multimodal input of 2.0 Flash, at the speed and cost of 1.5 Flash, while offering better quality than 1.5 Flash, according to Google. This model is available in Google AI Studio and Vertex AI in public preview. Google addressed safety concerns, reassuring the public that the models were built using techniques designed to enable safe usage, such as new reinforcement learning techniques. The company also shared that the models went through automated red-teaming to assess security risks. This announcement follows Google's publication of its Responsible AI: Our 2024 report, published yesterday.
[19]
Gemini 2.0 is now available to everyone
Koray Kavukcuoglu, CTO of Google DeepMind, on behalf of the Gemini team: In December, we kicked off the agentic era by releasing an experimental version of Gemini 2.0 Flash -- our highly efficient workhorse model for developers with low latency and enhanced performance. Earlier this year, we updated 2.0 Flash Thinking Experimental in Google AI Studio, which improved its performance by combining Flash's speed with the ability to reason through more complex problems. And last week, we made an updated 2.0 Flash available to all users of the Gemini app on desktop and mobile, helping everyone discover new ways to create, interact and collaborate with Gemini. Today, we're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash. We're also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It is available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users. We're releasing a new model, Gemini 2.0 Flash-Lite, our most cost-efficient model yet, in public preview in Google AI Studio and Vertex AI. Finally, 2.0 Flash Thinking Experimental will be available to Gemini app users in the model dropdown on desktop and mobile. All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months. More information, including specifics about pricing, can be found in the Google for Developers blog. Looking ahead, we're working on more updates and improved capabilities for the Gemini 2.0 family of models. First introduced at I/O 2024, the Flash series of models is popular with developers as a powerful workhorse model, optimal for high-volume, high-frequency tasks at scale and highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens.
We've been thrilled to see its reception by the developer community. 2.0 Flash is now generally available to more people across our AI products, alongside improved performance in key benchmarks, with image generation and text-to-speech coming soon. Try Gemini 2.0 Flash in the Gemini app or the Gemini API in Google AI Studio and Vertex AI. Pricing details can be found in the Google for Developers blog. As we've continued to share early, experimental versions of Gemini 2.0 like Gemini-Exp-1206, we've gotten excellent feedback from developers about its strengths and best use cases, like coding. Today, we're releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we've released so far. It comes with our largest context window at 2 million tokens, which enables it to comprehensively analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution.
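For a sense of scale, the 2 million-token window described above translates to roughly 1.5 million English words under the usual rule of thumb of about 0.75 words per token; that conversion factor is an approximation commonly used for English text, not an official Google figure.

```python
# Rough tokens-to-words conversion using the common ~0.75 words/token
# rule of thumb for English text. This is an approximation, not an
# official Google figure.
WORDS_PER_TOKEN = 0.75  # assumed rule of thumb

def tokens_to_words(tokens: int) -> int:
    return int(tokens * WORDS_PER_TOKEN)

print(tokens_to_words(2_000_000))  # 2M-token window (2.0 Pro Experimental)
print(tokens_to_words(1_000_000))  # 1M-token window (Flash / Flash-Lite)
```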
[20]
Google brings second-gen AI models to the Gemini mobile app
Earlier today, Google made a few notable AI announcements, at a time when the tech industry is peeling the layers of China's DeepSeek AI and the search giant is staring at antitrust heat in China. The latest from Google is an experimental version of Gemini 2.0 Pro, which is claimed to be the company's most capable model so far. "It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we've released so far," says the company. This one raises the prompt context window to 2 million tokens, allowing it to ingest and comprehend massive inputs with ease. On the more affordable side of things, Google is pushing the new 2.0 Flash-Lite model as a public preview. Focused on reduced costs and snappier performance, this one is now available in Google AI Studio and Vertex AI systems. What's new in the Gemini mobile app? For smartphone users, the Gemini app is now getting access to these AI upgrades. Starting today, the mobile application will let users pick between the new Gemini 2.0 Flash Thinking Experimental and Gemini 2.0 Pro Experimental models. Currently ranked as No. 1 on the Chatbot Arena LLM Leaderboard -- ahead of OpenAI's GPT-4o and DeepSeek R1 -- the Gemini 2.0 Flash Thinking Experimental model is a massive leap forward for a couple of reasons. First, it can work with data pulled from apps such as YouTube, Google Maps, and Search. Based on your queries, this Gemini model can cross-check information from within those platforms and offer relevant answers. Second, it comes with thinking and reasoning capabilities. To put it simply, you can see, in real-time, how this model breaks down your commands and puts together the information as a cohesive response. The result, as Google puts it, is improved explainability, speed, and performance.
It allows text and image input with support for up to a million token window, while the knowledge cut-off boundary is set to June 2024. Next, we have the Gemini 2.0 Pro Experimental model, which is now available to folks who pay for a Gemini Advanced subscription. Google says this one is "exceptional at complex tasks," particularly at chores such as maths problem-solving and coding. This multi-modal AI model can also pull relevant data from Google Search, and combine it with enhanced world understanding chops to handle more demanding tasks. You can access these new Gemini 2.0 series models on the mobile app as well as the web dashboard.
[21]
Google Has Officially Launched Gemini 2.0 for Everyone
The flood of AI news from Google comes in the wake of the launch of DeepSeek, the breakthrough Chinese AI tool that's been making headlines. Google delivered big AI news on Wednesday relating to its Gemini 2.0 next-generation chatbot. The company is opening up new Gemini 2.0 models in a multipronged initiative. Let's break it down. The company is releasing an experimental version of Gemini 2.0 Pro in Google AI Studio, Vertex AI, and in the Gemini app for Gemini Advanced users. The company calls this "our best model yet for coding performance and complex prompts." The updated Gemini 2.0 Flash is now generally available via the Gemini API in Google AI Studio and Vertex AI, meaning developers can now build production applications with 2.0 Flash. This gives more people access to the smaller, more efficient version of Google Gemini in the wake of DeepSeek's major splash. Read more: What Is DeepSeek? Everything to Know About the New Chinese AI Tool Google is also countering DeepSeek with Gemini 2.0 Flash-Lite, a less expensive alternative which is now available in Google AI Studio and Vertex AI for developers. Google says Flash-Lite (get it?) is its most cost-efficient model yet. People with the Gemini app will also get to try out Google Gemini 2.0 Flash Thinking Experimental, a benchmark darling as the top AI model according to Chatbot Arena's leaderboards. This version of Gemini 2.0 Flash is special thanks to its ability to break down user prompts into a series of steps to help provide better responses to complex prompts. For those looking for something more powerful, Google also added Gemini 2.0 Pro Experimental to the Gemini app for Gemini Advanced users as well as Google AI Studio and Vertex AI. "We are rolling out a version of 2.0 Flash Thinking that can interact with apps like YouTube, Search, and Google Maps," Patrick Kane, director of product management for the Gemini app at Google, wrote in a blog post.
"These connected apps already make the Gemini app a uniquely helpful AI-powered assistant, and we're exploring how new reasoning capabilities can combine with your apps to help you do even more." The wide release of Google Gemini 2.0 Pro Experimental, Flash-Lite, and Flash Thinking is the latest in a long line of AI announcements from Google. Late last month, the Samsung Galaxy S25 series became the first smartphones to ship with Google's Project Astra, a feature where users can point their phone cameras at things and get AI input based on context. The company also launched its Daily Listen feature, which generates audio summaries of news updates based on your Search and Discover activity. For Google, it's a large string of rapid announcements to an industry currently riding high on DeepSeek, the Chinese AI startup that boasts similar performance to bigger AI models at a fraction of the price. It's too soon to say if DeepSeek will stick around for the long haul due to a variety of factors, but Google's announcement and OpenAI's o3-mini release may help pry attention away from the young AI upstart.
[22]
Google Gemini 2.0 is now open to all, boosts virtual agent push
Google has opened its latest AI model suite, Gemini 2.0, to the public, marking a significant step in its push toward advanced AI agents. The suite includes Gemini 2.0 Pro Experimental, designed for coding and complex tasks, and Gemini 2.0 Flash Thinking, now available in the Gemini app. Gemini 2.0 Pro Experimental is described as Google's most capable model yet, excelling in coding and handling intricate prompts. It boasts a context window of 2 million tokens, enabling it to process approximately 1.5 million words at once. The model can call tools like Google Search and execute code on behalf of users. Initially teased in the Gemini app's changelog last week, it is now accessible via Google's AI development platforms, Vertex AI and Google AI Studio, as well as to Gemini Advanced subscribers in the Gemini app. Gemini 2.0 Flash, introduced in December, is now generally available. Billed as a "workhorse model," it is optimized for high-volume, high-frequency tasks and costs developers 10 cents per million tokens for text, image, and video inputs. Additionally, Google unveiled Gemini 2.0 Flash-Lite, its most cost-efficient model, which improves on the quality of its predecessor, Gemini 1.5 Flash, at the same price and speed. Flash-Lite costs 7.5 cents per million tokens. The release aligns with Google's broader strategy of advancing agentic AI -- models capable of performing complex, multistep tasks autonomously. In a December blog post, Google emphasized its focus on developing models that "understand more about the world around you, think multiple steps ahead, and take action on your behalf." Gemini 2.0 introduces new multimodal capabilities, including native image and audio output, as well as tool use, bringing Google closer to its vision of a universal assistant. This push places Google in direct competition with other tech giants and startups like Meta, Amazon, Microsoft, OpenAI, and Anthropic, all of which are investing heavily in agentic AI.
Anthropic's AI agents, for instance, can navigate computers similarly to humans, completing tasks with tens or hundreds of steps. OpenAI recently released Operator, an agent capable of automating tasks such as vacation planning and grocery ordering, while Deep Research compiles complex reports for users. Google also launched its own Deep Research tool in December, which functions as a research assistant exploring topics and compiling detailed reports. CEO Sundar Pichai emphasized the importance of execution over being first, stating in a December strategy meeting, "I think that's what 2025 is all about." Google's releases come amid growing attention to DeepSeek, the Chinese AI startup whose models rival or surpass those of leading American companies. DeepSeek's R1 model gained significant traction due to its affordability and performance. To counter this, Google is making its Gemini 2.0 Flash Thinking model more accessible through the Gemini app, potentially aiming to draw greater attention to its offerings.
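The per-million-token rates quoted above lend themselves to simple back-of-envelope budgeting. A minimal sketch, using the 10-cents-per-million-input-tokens figure for Gemini 2.0 Flash and an illustrative workload (1,000 requests averaging 2,000 input tokens each; output-token costs omitted):

```python
# Back-of-envelope input-side cost for a high-volume workload on 2.0 Flash.
# The workload numbers are illustrative, not from the article.
FLASH_INPUT_PRICE_PER_M = 0.10  # USD per 1M input tokens, as quoted

def workload_cost(n_requests, avg_input_tokens,
                  price_per_m=FLASH_INPUT_PRICE_PER_M):
    """Dollar cost of n_requests averaging avg_input_tokens input tokens each."""
    return n_requests * avg_input_tokens / 1e6 * price_per_m

print(f"${workload_cost(1_000, 2_000):.2f}")  # 2M input tokens total
```

At these rates the whole 2M-token batch costs about 20 cents on the input side, which is why "workhorse" models like Flash are pitched at high-frequency tasks.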
[23]
Google's 'most capable' AI model is now available to everyone
The tech giant said on Wednesday that users of its Gemini app can now try the 2.0 Flash Thinking Experimental AI model -- which ranks as the best model in the world on the community-driven Chatbot Arena. Gemini 2.0 Flash Thinking Experimental is trained to "strengthen its reasoning capabilities" by breaking down prompts step-by-step and showing users its "thought process" to understand how it came to its response. Through this process, users can see "what its assumptions were, and trace the model's line of reasoning," Google (GOOGL) said. The company also released a version of 2.0 Flash Thinking that can work with other Google apps such as YouTube and Maps. Advanced subscribers to Gemini get priority access to the 2.0 suite of AI models, including an experimental version of Gemini 2.0 Pro starting on Wednesday. The Pro model is also available in AI Studio and Google's Vertex AI platform. In December, the tech giant introduced Gemini 2.0, which "will enable us to build new AI agents that bring us closer to our vision of a universal assistant," Google chief executive Sundar Pichai said. AI agents are software that can complete complex tasks autonomously for a user. Developers and testers were the first to get 2.0, while all Gemini users were given access to the Gemini 2.0 Flash experimental model. The Flash model was built off of Gemini 1.5 Flash, which Google launched in July as its fastest, most cost-efficient model. Google's new Flash-Lite model, which it also announced on Wednesday, is an improved version of 1.5 Flash but is priced similarly. Meanwhile, Google parent company Alphabet missed Wall Street's expectations for the fourth quarter despite "robust momentum across the business." Alphabet reported revenue of $96.5 billion for the fourth quarter -- a 12% increase year over year. The company also reported earnings of $2.15 per share -- up 31% from the previous year, and net income of $26.5 billion for the quarter ended in December.
[24]
Google Launches New Versions of Gemini, Including 'Thinking' Model
Google today announced updates to Gemini, the company's AI product that competes with OpenAI's ChatGPT, DeepSeek, and Apple Intelligence. Starting today, Gemini app users can access Google's 2.0 Flash Thinking Experimental model, which is trained to break down prompts into a series of steps to improve its reasoning capabilities. The new model shows its reasoning process, giving users insight into why it responds the way that it does. There is a version of 2.0 Flash Thinking that is able to interact with apps that include YouTube, Search, and Google Maps, with Google working to determine how the new reasoning capabilities can help users do more with Google apps. An experimental version of Gemini 2.0 Pro is also now available, and Google says it is the best model yet for coding performance and answering complex prompts. Gemini 2.0 Pro is available in Google AI Studio, Vertex AI, and the Gemini app for Gemini Advanced subscribers. A Gemini 2.0 Flash-Lite model is available for Google AI Studio as well, and Google says that it is the most cost-efficient model to date. Gemini 2.0 Flash Thinking Experimental and 2.0 Pro Experimental are rolling out to the Gemini web and mobile app. They can be selected in the Gemini dropdown menu when interacting with the AI.
[25]
Google opens its most powerful AI models to everyone, the next stage in its virtual agent push
Google on Wednesday released Gemini 2.0 -- its "most capable" artificial intelligence model suite yet -- to everyone. In December, the company gave access to developers and trusted testers, as well as wrapping some features into Google products, but this is a "general release," according to Google. The suite of models includes 2.0 Flash, which is billed as a "workhorse model, optimal for high-volume, high-frequency tasks at scale"; 2.0 Pro Experimental, which is largely focused on coding performance; and 2.0 Flash-Lite, which Google bills as its "most cost-efficient model yet." Gemini Flash costs developers 10 cents per million tokens for text, image and video inputs, while Flash-Lite, its more cost-effective version, costs 7.5 cents per million tokens for the same. The continued releases are part of a broader strategy for Google of investing heavily into "AI agents" as the AI arms race heats up among tech giants and startups alike. Meta, Amazon, Microsoft, OpenAI and Anthropic are also moving toward agentic AI, or models that can complete complex multi-step tasks on a user's behalf, rather than a user having to walk them through every individual step.
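The per-million-token rates quoted above translate directly into workload costs. A minimal sketch, assuming the article's figures of $0.10 per million input tokens for 2.0 Flash and $0.075 for Flash-Lite (actual billing tiers and modality surcharges may differ; check Google's current price list):

```python
# Rough input-cost estimator using the per-million-token rates quoted above.
# Prices are the article's figures, not an authoritative price list.
PRICE_PER_M_INPUT = {
    "gemini-2.0-flash": 0.10,        # 10 cents per 1M input tokens
    "gemini-2.0-flash-lite": 0.075,  # 7.5 cents per 1M input tokens
}

def input_cost(model: str, tokens: int) -> float:
    """Dollar cost of sending `tokens` input tokens to `model`."""
    return PRICE_PER_M_INPUT[model] / 1_000_000 * tokens

# Processing one full 1M-token context with each model:
flash = input_cost("gemini-2.0-flash", 1_000_000)      # $0.10
lite = input_cost("gemini-2.0-flash-lite", 1_000_000)  # $0.075
```

At these rates, even saturating the full 1-million-token context window costs a developer only cents per call.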
[26]
Google Gemini's new model is the brainstorming AI partner you've been looking for
The update also brings the Gemini 2.0 Pro Experimental and Flash-Lite models to the app. Google has dropped a major upgrade to the Gemini app with the release of the Gemini 2.0 Flash Thinking Experimental model, among others. This combines the speed of the original 2.0 model with improved reasoning abilities, so it can think fast but will think things through before it speaks. For anyone who has ever wished their AI assistant could process more complex ideas without slowing its response time, this update is a promising step forward. Gemini 2.0 Flash was originally designed as a high-efficiency workhorse for those who wanted rapid AI responses without sacrificing too much in terms of accuracy. Earlier this year, Google updated it in AI Studio to enhance its ability to reason through tougher problems, calling it the Thinking Experimental. Now, it's being made widely available in the Gemini app for everyday users. Whether you're brainstorming a project, tackling a math problem, or just trying to figure out what to cook with the three random ingredients left in your fridge, Flash Thinking Experimental is ready to help. Beyond the Thinking Experimental, the Gemini app is getting additional models. The Gemini 2.0 Pro Experimental is an even more powerful, albeit somewhat more cumbersome, version of Gemini. It's aimed at coding and handling complex prompts. It's already been available in Google AI Studio and Vertex AI. Now, you can get it in the Gemini app, too, but only if you subscribe to Gemini Advanced. With a context window of two million tokens, this model can simultaneously digest and process massive amounts of information, making it ideal for research, programming, or rather ridiculously complicated questions. The model can also utilize other Google tools like Search if necessary. Gemini is also augmenting its app with a slimmer model called Gemini 2.0 Flash-Lite. This model is built to improve on its predecessor, 1.5 Flash. 
It retains the speed that made the original Flash models popular while performing better on quality benchmarks. In a real-world example, Google says it can generate relevant captions for around 40,000 unique photos for less than a dollar, making it a potentially fantastic resource for content creators on a budget. Beyond just making AI faster or more affordable, Google is pushing for broader accessibility by ensuring all these models support multimodal input. Currently, the AI only produces text-based output, but additional capabilities are expected in the coming months. That means users will eventually be able to interact with Gemini in more ways, whether through voice, images, or other formats. What makes all of this particularly significant is how AI models like Gemini 2.0 are shaping the way people interact with technology. AI is no longer just a tool that spits out basic answers; it's evolving into something that can reason, assist in creative processes, and handle deeply complex requests. How people use the Gemini 2.0 Flash Thinking Experimental model and other updates could show a glimpse into the future of AI-assisted thinking. It continues Google's dream of incorporating Gemini into every aspect of your life by offering streamlined access to a relatively powerful yet lightweight AI model. Whether that means solving complex problems, generating code, or just having an AI that doesn't freeze up when asked something a little tricky, it's a step toward AI that feels less like a gimmick and more like a true assistant. With additional models catering to both high-performance and cost-conscious users, Google is likely hoping to have an answer for anyone's AI requests.
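Google's 40,000-captions-for-under-a-dollar figure is easy to sanity-check against Flash-Lite's reported rates of $0.075 per million input tokens and $0.30 per million output tokens. The per-image and per-caption token counts below are assumptions for illustration, not Google's published accounting:

```python
# Back-of-the-envelope check of the "40,000 captions for under a dollar" claim.
# TOKENS_PER_IMAGE and TOKENS_PER_CAPTION are assumed values, not official figures.
IMAGES = 40_000
TOKENS_PER_IMAGE = 258     # assumed input tokens billed per image
TOKENS_PER_CAPTION = 15    # assumed length of a short generated caption
INPUT_PRICE_PER_M = 0.075  # Flash-Lite, dollars per 1M input tokens
OUTPUT_PRICE_PER_M = 0.30  # Flash-Lite, dollars per 1M output tokens

input_cost = IMAGES * TOKENS_PER_IMAGE / 1_000_000 * INPUT_PRICE_PER_M
output_cost = IMAGES * TOKENS_PER_CAPTION / 1_000_000 * OUTPUT_PRICE_PER_M
total = input_cost + output_cost  # roughly $0.95 under these assumptions
```

Under these assumptions the job lands just under a dollar, consistent with Google's claim; longer captions or larger per-image token counts would push it over.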
[27]
The Gemini AI app can now show its thinking
Jess Weatherbed is a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews. Google is bringing its experimental "reasoning" artificial intelligence model, capable of explaining how it answers complex questions, to the Gemini app. The Gemini 2.0 Flash Thinking update is part of a slew of Gemini 2.0 AI rollouts announced by Google today, including its latest Gemini 2.0 Pro flagship model. This comes as the search giant expects to spend $75 billion in capital expenditures this year, in part on growing its monotonously named family of AI models. That's a considerable jump from the $32.3 billion in capital expenditures it spent in 2023, with Google now racing to keep up with AI competitors like OpenAI, Microsoft, Meta, and the Amazon-backed Anthropic. Gemini 2.0 Flash Thinking will be available in the model dropdown options on the desktop and mobile app starting today, alongside another version of the model that can "interact with apps like YouTube, Search, and Google Maps," according to Google. It was introduced in December 2024 and is expected to compete with other so-called reasoning AI models like OpenAI's o1 and DeepSeek's R1. These models work by breaking problems down into smaller, manageable steps, allowing them to "think" about prompts before offering a solution. The intended outcome is stronger and more accurate results, though often at the expense of taking longer to achieve them. Google is also releasing an experimental version of Gemini 2.0 Pro. According to leaks of the preview reported by TechCrunch, the successor to Gemini 1.5 Pro should provide "better factuality" and "stronger performance" for coding and mathematics-related tasks. Gemini 2.0 Pro is described as Google's "most capable model yet," and will be available to Advanced Gemini app users and people with access to Vertex AI and AI Studio. 
Gemini 2.0 Flash -- the latest version of Google's high-efficiency workhorse AI model -- is also now generally available to developers in AI Studio and Vertex AI following its rollout to Gemini's web and mobile apps last week. Lastly, Google is rounding off these updates with the introduction of a new low-cost model called 2.0 Flash-Lite, which the company says matches 1.5 Flash for speed and price while outperforming it "on the majority of benchmarks." Gemini 2.0 Flash-Lite is launching in public preview today on Google's AI Studio and Vertex AI.
[28]
Google's Gemini AI app is getting faster with Flash 2.0
Jay Peters is a news editor covering technology, gaming, and more. He joined The Verge in 2019 after nearly two years at Techmeme. Google announced Thursday that the Gemini app is getting its Gemini 2.0 Flash AI model. The upgraded model "delivers fast responses and stronger performance across a number of key benchmarks, providing everyday help with tasks like brainstorming, learning or writing," the company said in a post. The change is rolling out to Gemini's web and mobile apps and will be available to all users. Google also says that you'll still be able to use Gemini 1.5 Flash and 1.5 Pro for "the next few weeks." The company first introduced Gemini 2.0 in December and promised that it was "working quickly" to get it in its products. At the time, it launched an experimental version of Gemini Flash 2.0 to Gemini users. On Thursday, Google also said that Gemini's image generation capabilities now use the newest version of the company's Imagen 3 AI text-to-image generator. According to Google, the model "delivers richer details and textures" and "follows your instructions with greater accuracy."
[29]
Google launches Gemini 2.0 Pro, Flash-Lite and connects reasoning model Flash Thinking to YouTube, Maps and Search!
Google's Gemini series of AI large language models (LLMs) started off rough nearly a year ago with some embarrassing incidents of image generation gone awry, but has steadily improved, and the company appears intent on making its second-generation effort -- Gemini 2.0 -- its biggest and best yet for consumers and enterprises. Today, the company announced the general release of Gemini 2.0 Flash, the introduction of Gemini 2.0 Flash-Lite, and an experimental version of Gemini 2.0 Pro. These models, designed to support developers and businesses, are now accessible through Google AI Studio and Vertex AI, with Flash-Lite in public preview and Pro available for early testing. "All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months," wrote Koray Kavukcuoglu, chief technology officer of Google DeepMind, in the company's announcement blog post -- showcasing an advantage Google is bringing to the table even as competitors such as DeepSeek and OpenAI continue to launch powerful rivals. Google plays to its multimodal strengths. Neither DeepSeek R1 nor OpenAI's new o3-mini model can accept multimodal inputs, that is, images and file uploads or attachments. While DeepSeek R1 can accept them on its website and mobile app chat, it performs optical character recognition (OCR), a more than 60-year-old technology, to extract only the text from these uploads -- not actually understanding or analyzing any of the other features contained therein. However, both are a new class of "reasoning" models that deliberately take more time to think through answers and reflect on "chains of thought" and the correctness of their responses. 
That's opposed to typical LLMs like the Gemini 2.0 Pro series, so comparing Gemini 2.0 to DeepSeek R1 and OpenAI o3 is a bit apples-to-oranges. But there was some news on the reasoning front today from Google, too: Google CEO Sundar Pichai took to the social network X to declare that the Google Gemini mobile app for iOS and Android has been updated with Google's own rival reasoning model, Gemini 2.0 Flash Thinking, and that the model can be connected to Google's existing hit services Google Maps, YouTube, and Google Search, allowing for a whole new range of AI-powered research and interactions that simply can't be matched by upstarts without such services, like DeepSeek and OpenAI. I tried it briefly on the Google Gemini iOS app on my iPhone while writing this piece, and it was impressive based on my initial queries, thinking through the commonalities of the top 10 most popular YouTube videos of the last month and also providing me a table of nearby doctors' offices and their opening and closing hours, all within seconds. Gemini 2.0 Flash, meanwhile, is designed for high-efficiency AI applications, providing low-latency responses and supporting large-scale multimodal reasoning. One major benefit over the competition is its context window, or the number of tokens that the user can send as a prompt and receive back in one back-and-forth interaction with an LLM-powered chatbot or application programming interface. While many leading models, such as OpenAI's new o3-mini that debuted last week, support only 200,000 or fewer tokens -- about the equivalent of a 400-500 page novel of information density -- Gemini 2.0 Flash supports 1 million, meaning it is capable of handling vast amounts of information, making it particularly useful for high-frequency and large-scale tasks. 
Gemini 2.0 Flash-Lite arrives to bend the cost curve to the lowest yet. Gemini 2.0 Flash-Lite, meanwhile, is an all-new large language model aimed at providing a cost-effective AI solution without compromising on quality. Google DeepMind states that Flash-Lite outperforms its full-size (larger parameter-count) predecessor, Gemini 1.5 Flash, on third-party benchmarks such as MMLU Pro (77.6% vs. 67.3%) and Bird SQL programming (57.4% vs. 45.6%), while maintaining the same pricing and speed. It also supports multimodal input and features a context window of 1 million tokens, similar to the full Flash model. Currently, Flash-Lite is available in public preview through Google AI Studio and Vertex AI, with general availability expected in the coming weeks. As shown in the table below, Gemini 2.0 Flash-Lite is priced at $0.075 per million tokens (input) and $0.30 per million tokens (output). Flash-Lite is positioned as a highly affordable option for developers, outperforming Gemini 1.5 Flash across most benchmarks while maintaining the same cost structure. Logan Kilpatrick highlighted the affordability and value of the models, stating: "Gemini 2.0 Flash is the best value prop of any LLM, it's time to build!" Indeed, compared to other leading traditional LLMs available via provider API, such as OpenAI's GPT-4o mini ($0.15/$0.60 per 1 million tokens in/out), Anthropic's Claude ($0.80/$4.00 per 1M in/out), and even DeepSeek's traditional LLM V3 ($0.14/$0.28), Gemini 2.0 Flash appears to be the best bang for the buck. Gemini 2.0 Pro arrives in experimental availability with a 2-million-token context window. For users requiring more advanced AI capabilities, the Gemini 2.0 Pro (Experimental) model is now available for testing. Google DeepMind describes this as its strongest model for coding performance and handling complex prompts. 
It features a 2 million-token context window and improved reasoning capabilities, with the ability to integrate external tools like Google Search and code execution. Sam Witteveen, co-founder and CEO of Red Dragon AI and an external Google Developer Expert for Machine Learning, discussed the Pro model in a YouTube review. "The new Gemini 2.0 Pro model has a two-million-token context window, supports tools, code execution, function calling, and grounding with Google Search -- everything we had in Pro 1.5 but improved." He also noted Google's iterative approach to AI development: "One of the key differences in Google's strategy is that they release experimental versions of models before they go GA (generally accessible), allowing for rapid iteration based on feedback." Performance benchmarks further illustrate the capabilities of the Gemini 2.0 model family. Gemini 2.0 Pro, for instance, outperforms Flash and Flash-Lite across tasks like reasoning, multilingual understanding, and long-context processing. AI Safety and Future Developments Alongside these updates, Google DeepMind is implementing new safety and security measures for the Gemini 2.0 models. The company is leveraging reinforcement learning techniques to improve response accuracy, using AI to critique and refine its own outputs. Additionally, automated security testing is being used to identify vulnerabilities, including indirect prompt injection threats. Looking ahead, Google DeepMind plans to expand the capabilities of the Gemini 2.0 model family, with additional modalities beyond text expected to become generally available in the coming months. With these updates, Google is reinforcing its push into AI development, offering a range of models designed for efficiency, affordability, and advanced problem-solving, and answering the rise of DeepSeek with its own suite of models ranging from powerful to very powerful and extremely affordable to slightly less (but still considerably) affordable. 
Will it be enough to help Google eat into some of the enterprise AI market, which was once dominated by OpenAI and has now been upended by DeepSeek? We'll keep tracking and let you know!
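The cross-provider price comparison in the article can be made concrete by blending input and output rates for a given workload mix. The sketch below uses the rates as quoted in the article; provider pricing changes often, so treat the numbers as a snapshot, not current figures:

```python
# Blended dollar cost for a 1M-token workload, using the per-million rates
# quoted in the article (input $/1M, output $/1M). Rates are a snapshot.
RATES = {
    "gemini-2.0-flash-lite": (0.075, 0.30),
    "gpt-4o-mini": (0.15, 0.60),
    "claude": (0.80, 4.00),
    "deepseek-v3": (0.14, 0.28),
}

def blended_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total cost in dollars for the given input/output token counts."""
    inp, out = RATES[model]
    return inp * input_tokens / 1e6 + out * output_tokens / 1e6

# A read-heavy workload: 900k input tokens, 100k output tokens.
costs = {m: round(blended_cost(m, 900_000, 100_000), 4) for m in RATES}
# Under these quoted rates, Flash-Lite comes out cheapest at $0.0975.
```

The ranking shifts with the workload mix: Claude's much higher output rate matters more for generation-heavy jobs, while input-heavy jobs amplify Flash-Lite's advantage.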
[30]
Google's Gemini rolls out 'world's best' AI model, free of charge
The new experimental AI model is better at explaining its reasoning, leading to better answers. Google has now given its Gemini app access to a new AI model called Gemini 2.0 Flash Thinking Experimental. According to the announcement blog post, the company says this model is currently "ranked as the world's best model" and is available at no cost. The model is designed to answer complex questions by breaking them down into smaller steps and explaining its assumptions and reasoning, which should lead to more accurate and reliable answers. At the same time, Google is also releasing a version of 2.0 Flash Thinking that can interact with other Google apps, such as YouTube, Search, and Maps. The goal is for Gemini to further incorporate these apps to bolster its reasoning capabilities to help you do more. Furthermore, Google is rolling out an experimental version of 2.0 Pro to Gemini Advanced subscribers. Gemini 2.0 Pro Experimental is designed for complex tasks and is intended to be more factually accurate and perform better when it comes to coding and math. Both 2.0 Flash Thinking Experimental and 2.0 Pro Experimental models will be released for Gemini on both the web and mobile app.
[31]
Google upgrades Gemini with 2.0 Flash -- What's new?
Google has announced that its Gemini app is now utilizing the Gemini 2.0 Flash AI model, promising faster responses and improved performance for various tasks such as brainstorming, learning, and writing. The rollout is taking place on both web and mobile platforms, and all users will have access to the updated model. The company introduced Gemini 2.0 in December, indicating it was working quickly to implement this model into its products. An experimental version was made available to users previously, and now Gemini 2.0 Flash is set as the default for the app. In addition to the performance upgrades, Google has enhanced Gemini's image generation capabilities through the latest version of the Imagen 3 AI text-to-image generator. This new model is designed to deliver richer details and textures while following user instructions with greater accuracy. Gemini Advanced users will continue to benefit from a 1M token context window for file uploads up to 1,500 pages, along with priority access to features like Deep Research and Gems. Gemini 1.5 Flash and 1.5 Pro models will still be available for users for the next few weeks for ongoing conversations. In a related update, users reported that references to the Gemini 2.0 Pro Experimental model were removed from changelogs. A Google spokesperson clarified that an outdated note was published in error. The Gemini 2.0 Pro Experimental model, which is available to Gemini Advanced users, is positioned as Google's leading AI model, reportedly offering better factuality and performance in coding and mathematics tasks. This experimental model aims to assist users with complex tasks, including generating specific programming codes and solving advanced mathematical problems. However, Google has noted that it is in an early preview stage and may exhibit unexpected behaviors. It will not have access to real-time information and lacks compatibility with certain features in the app.
[32]
Google quietly announces its next flagship AI model | TechCrunch
Google has quietly announced the launch of its next-gen flagship AI model, Gemini 2.0 Pro Experimental, in a changelog for the company's Gemini chatbot app. The launch of Gemini 2.0 Pro Experimental, the successor to the Gemini 1.5 Pro model Google launched last February, comes as the tech world remains fixated on Chinese AI startup DeepSeek. DeepSeek's latest models, which are openly available for companies to download and use, match or best many leading models from American tech giants and AI companies. That's led to a reckoning in Silicon Valley -- and at the highest levels of the U.S. government. Gemini 2.0 Pro Experimental, which is available to Gemini Advanced users beginning Thursday, is now the leading model in Google's Gemini AI family, the company said. It should provide "better factuality" and "stronger performance" for coding and mathematics-related tasks. Gemini Advanced is a part of Google's Google One AI Premium paid plan, and also available through Google's Gemini for Google Workspace add-ons. "Whether you're tackling advanced coding challenges, like generating a specific program from scratch, or solving mathematical problems, like developing complex statistical models or quantum algorithms, 2.0 Pro Experimental will help you navigate even the most complex tasks with greater ease and accuracy," Google writes in the changelog. Google notes that Gemini 2.0 Pro Experimental is in "early preview," and can have "unexpected behaviors" -- and may make mistakes. Also, unlike other Gemini models available in the Gemini app, Gemini 2.0 Pro Experimental doesn't have access to real-time information, and isn't compatible with some of the app's features. "We believe in rapid iteration and bringing the best of Gemini to the world, and we want to give Gemini Advanced subscribers priority access to our latest AI innovations," Google continued in the changelog. 
"Your feedback helps us improve these models over time and learning from experimental launches informs how we release models more widely." Coinciding with the Gemini 2.0 Pro Experimental release, Google has brought the Gemini 2.0 Flash model it announced in December to the Gemini app for all users. It'll remain the default Gemini app model for the foreseeable future.
[33]
Google's Latest (and Fastest) Gemini Model Is Now Available to Everyone -- Even Free Users
As Google continues to refine its family of modern generative AI models, known collectively as Gemini, the company releases more advanced versions for free. The latest wide release is Gemini 2.0 Flash, Google's fastest and most effective model yet. Gemini 2.0 Flash Is Available to Free Users and Developers Exiting triumphantly from its experimental phase is Gemini 2.0 Flash, a "highly efficient workhorse model," as described by Google's The Keyword. Known for low latency and strong performance, Gemini 2.0 Flash is now popping up in a few new places: on the Gemini app, the Google AI Studio and Vertex AI. Gemini App On January 30th, 2025, Google added the model to the Gemini app, available to both paying and non-paying users. You can simply go to Gemini's web, desktop, or mobile app, and it will default to Gemini 2.0 Flash. You do still have the option to select Gemini 1.5 Flash, Google's previous model. I'm not sure why you would, but there could be something I'm missing as a basic user. While you don't need to pay for Gemini Advanced to use 2.0 Flash, a paid subscription will still get you extra perks. These premium offerings include access to more experimental models, the ability to upload lengthy documents for analysis, and more. You can check out the full list of benefits on Google's Gemini Advanced page. Google AI Studio and Vertex AI In addition to bringing it to everyday users, Google has now made Gemini 2.0 Flash available to developers by adding it to the Google AI Studio and Vertex AI platforms. Developers can build applications using 2.0 Flash on both of these platforms, which cater to different use cases. It's All in the 2.0 Flash Family While making Gemini 2.0 Flash more widely available, Google has also shared early versions of other models in the "2.0 family," including Gemini 2.0 Pro and Gemini 2.0 Flash-Lite. 
Additionally, Google has dropped Gemini 2.0 Flash Thinking Experimental on the Gemini app after it was previously only accessible via Google AI Studio. According to Google, these early models will start with text output and gain additional modalities in the future: "All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months." If you, like me, find it challenging to keep up with the various iterations of Gemini models, here's a quick breakdown of the newest releases:

Gemini 2.0 Flash (generally available; Gemini app for all users, Google AI Studio, Vertex AI): fast responses and stronger performance compared to 1.5. Useful for everyday tasks like brainstorming, learning, and writing, as well as for developers building applications.

Gemini 2.0 Flash-Lite (public preview; Google AI Studio, Vertex AI): a cost-effective yet updated option for developers, with "better quality than 1.5 Flash, at the same speed and cost."

Gemini 2.0 Pro (experimental; Gemini app for Advanced users, Google AI Studio, Vertex AI): Google's "best model yet for coding performance and complex prompts."

Gemini 2.0 Flash Thinking (experimental; Gemini app for all users, Google AI Studio): combines "Flash's speed with the ability to reason through more complex problems."

As someone who uses Gemini largely for writing support and occasional curiosity questions, I'm not sure if I'll truly feel the difference between 1.5 Flash and the new-and-improved 2.0 Flash. I do imagine those who leverage Gemini for more complex tasks and building will reap the rewards of the low-latency model. In any case, I'll be eager to see how 2.0 Flash handles image generation, which is "coming soon" along with text-to-speech, according to Google.
[34]
Gemini 2.0 opens for everyone, Google announces
As the AI war heats up after the arrival of China's DeepSeek, Google on Wednesday opened its most powerful AI model suite, Gemini 2.0, to everyone. Although Google calls this a "general release," the company granted developers and trusted testers access in December and integrated some capabilities into Google products. In December last year, Google launched an experimental version of Gemini 2.0 Flash, its highly efficient workhorse model for developers, with reduced latency and improved speed. It later updated 2.0 Flash Thinking Experimental in Google AI Studio, which enhanced performance by fusing the speed of Flash with the capacity to solve increasingly challenging problems. Now, fast forward to February 5: an upgraded Gemini 2.0 Flash version is available to all desktop and mobile Gemini app users, enabling them to explore new ways to create, engage, and work together with Gemini. Looking ahead, Google is making its most affordable model yet, Gemini 2.0 Flash-Lite, available for public preview in Google AI Studio and Vertex AI. These new releases are part of Google's broader strategy of making significant investments in "AI agents" as AI competition between startups and tech firms heats up after the arrival of DeepSeek. Additionally, companies like Meta, Amazon, Microsoft, OpenAI, and Anthropic are pursuing agentic AI, or models that can carry out extensive multi-step tasks without requiring a user to guide them through each step. All of these models will have text output and multimodal input when they are released, with additional modalities set to be made widely available in the upcoming months. 
More details, including pricing, are available on the Google for Developers blog. According to Google's blog post, the Gemini 2.0 lineup was developed using novel reinforcement learning techniques in which Gemini evaluates its own responses; this improved the models' handling of sensitive prompts and produced more precise, focused feedback. Additionally, Google is using automated red teaming to evaluate safety and security threats, such as those posed by indirect prompt injection, a kind of cyberattack in which hackers conceal malicious instructions in data that an AI system is expected to encounter. Although some developers and testers acquired early access in December 2024, Gemini 2.0 was made available to everyone on February 5, 2025. Some Google products already incorporate a few Gemini 2.0 features, and more are anticipated in the upcoming months.
[35]
Google's Gemini 2.0 Models Are Arriving For Everyone
Last month, Google began testing Gemini 2.0, which promised to make Google's AI chatbot more powerful and capable than ever. Now, the update is finally available for everyone. Google has officially begun releasing the stable versions of Gemini 2.0 to the Gemini app, bringing the newer AI model's capabilities to all users, including those on the free tier. The preview version of both the Flash and Advanced models was rolled out to Gemini Advanced users so they could try it out. Gemini 2.0 Flash, positioned for "everyday tasks," is making its way to users now, while the Advanced model, which will presumably be dubbed "Gemini 2.0 Pro," will take a bit longer to land. The 2.0 Advanced preview took a few days longer to show up than the Flash preview, so we should see it sometime within the next week if it's ready for primetime. 1.5 Pro is already referred to as a "previous model," so the release of 2.0 Pro should be imminent as well. Although file and Drive access remain exclusive to Gemini Advanced subscribers, the fact that the free version is getting image uploads is pretty outstanding. Google touts 2.0 Flash as delivering faster responses and stronger performance across various benchmarks, including code, factuality, math, and reasoning, compared to the previous 1.5 model it is replacing. Alongside the 2.0 Flash rollout and the likely 2.0 Pro release coming soon, Google is phasing out its earlier models, 1.5 Flash and 1.5 Pro, renaming their descriptions to "Previous model" in the app. These models will remain accessible for a limited time, letting you continue existing conversations if you need them. In a separate update, Google has upgraded the image generation capabilities within Gemini to Imagen 3. This update promises richer details, improved textures, and greater accuracy in following user instructions. The Gemini 2.0 Flash update is now rolling out to users globally on the web version. 
It should be available on the app as well eventually, but it's not showing up there for us just yet. Source: 9to5Google
[36]
Google's Gemini 2.0: Will it Beat DeepSeek?
In response to DeepSeek's growing presence, Google has introduced Gemini 2.0 Flash-Lite, a more affordable alternative aimed particularly at developers. Flash-Lite promises to deliver strong value while maintaining robust AI performance, positioning it as the company's most budget-friendly model to date. Google also released the Gemini 2.0 Flash Thinking Experimental model, which breaks user prompts into steps to produce better responses, particularly for intricate queries. This version is accessible through the Gemini app and is designed to improve how users interact with the AI. Also integrated with Google services such as Maps, YouTube, and Search, the app offers a comprehensive AI-powered assistant experience.
[37]
Google's Gemini 2.0 Flash is now rolling out for everyone
Summary: The Gemini 2.0 Flash update is rolling out for everyone, including free-tier users. The new model offers improved responses for writing, learning, and brainstorming tasks, and Advanced subscribers can access a variety of Gemini models for query handling, with a phased rollout for Flash 2.0. Google's AI chatbot rival to services like Meta AI and OpenAI's ChatGPT is Gemini, but it isn't as simple as pulling up a chat and querying away. Google offers a variety of different models to choose from, even if you aren't paying for a subscription to use advanced models. Of these, Flash is the one that strikes a balance between the performance of Pro and the compactness of Nano, and it just got an update. Google describes Flash as the Gemini model you would rely on for daily tasks, such as those instructions to Gemini which now replaces Google Assistant. The company says you can now use Gemini 2.0 Flash for conversations with the AI in the Gemini app for Android. The new model supersedes Gemini 1.5, and analyst @MaxWeinbach on X (formerly Twitter) notes its performance is notably better than OpenAI's GPT-4o and Claude 3.6 Sonnet. Flash 2.0 was first released as an experiment in December last year, but it is now stable, so all users have access to it. However, if you're a Gemini Advanced subscriber, you can now choose between the following Gemini models for query handling: Flash 2.0, Flash 1.5, Pro 1.5, Pro 1.5 with Deep Research, and Experimental Advanced 2.0. The phased rollout might mean some users need to wait a little. Google confirms the older Gemini Flash 1.5 will be sticking around for the next few weeks, so users engaged in interactions with it can wrap up their work before switching to the new model. Specifically, the company says Flash 2.0 is snappier with responses for tasks such as writing, learning, and brainstorming. 
This might seem cryptic or too little to gauge what has changed, and you'd be right to believe that, because it seems to be an incremental update, and not a giant leap ahead. However, Gemini also runs the latest version of Imagen 3 now, for image generation needs, which is more accurate. Flash 2.0 is coming to the web and mobile, but we aren't seeing it on our devices at the time of writing, so it could be a phased rollout. In any case, Gemini updates are often linked to the Google app as well, so ensure you update the it as well as the Gemini app before checking if you have access to the new model. Thanks: Armando
[38]
Gemini 2.0 Flash is finally ready for prime time
Today Google graduates 2.0 Flash to full-blown status, ready for you to use in the Gemini app and on the web

Keeping up with advancements in artificial intelligence is practically a full-time job, and this week basically everyone was obsessed with learning all they could about DeepSeek, the open-source model from China. While that sent stock markets into a tizzy, it's also exactly the sort of thing we're only going to see more of, as accessing powerful AI tools becomes easier by the day. Today Google shares its own latest progress along that line, as Gemini's Flash 2.0 model leaves its experimental phase.
[39]
Gemini 2.0 Flash hits stable, starts rolling out to Gemini app
After entering preview last month, Google is now rolling out the stable version of Gemini 2.0 Flash to the Gemini app. In the Gemini app's model picker, 2.0 Flash is described as being "For everyday tasks, plus more features." The descriptions for 1.5 Flash (previously "Get everyday help") and 1.5 Pro ("Tackle complex tasks") have both been changed to "Previous model." For Gemini Advanced subscribers, there are no changes to 1.5 Pro with Deep Research and 2.0 Experimental Advanced (which remains gemini-exp-1206). Google last month said Gemini 2.0 Flash outperforms 1.5 Pro across code, factuality, math, reasoning, and other benchmarks at twice the speed. This family is meant to be a "new AI model for the agentic era." As of Thursday afternoon (PT), it's slowly rolling out and not yet widely available. We're seeing it on both free and Gemini Advanced accounts on the web (but not the mobile app), with Google keeping 1.5 Flash around for now. Some users see a "New model added: Choose one that best fits your needs" banner that doesn't reveal 2.0 Flash when tapped. There's no official announcement from Google just yet, and the updated API for developers has not yet appeared.
[40]
'Gemini 2.0 Pro Experimental' mentioned, but is not here yet [U]
Update: Google has revised the Gemini release notes (to 2025.01.30) and removed the "2.0 Pro Experimental" announcement. In releasing 2.0 Flash today, it looks like a previous version (2025.01.28) of the changelog was used. Nevertheless, it seems Google is continuing its "Flash" and "Pro" strategy this generation. Back in December, the company started testing 2.0 Experimental Advanced (gemini-exp-1206). It became available on Android and iOS earlier this month. Many assumed it was Gemini 2.0 Pro, and Google (briefly) confirmed that this afternoon. 2.0 Pro Experimental was described as Google's "leading model designed to be exceptional at complex tasks, providing better factuality and stronger performance for coding and math prompts." Google names use cases such as "tackling advanced coding challenges, like generating a specific program from scratch, or solving mathematical problems, like developing complex statistical models or quantum algorithms." Per Google, "...2.0 Pro Experimental will help you navigate even the most complex tasks with greater ease and accuracy." Google mentioned Gemini 2.0 Pro Experimental alongside 2.0 Flash going stable in the 2025.01.28 release updates, but it never hit the model picker before the changelog was advanced to 2025.01.30. It remains to be seen what the model number is. 1.5 Flash and 1.5 Pro will remain available for a few more weeks to let users continue existing conversations. As of Thursday afternoon (PT), the company has yet to make any developer-focused announcements around 2.0 Flash hitting general availability. Gemini Advanced requires Google One AI Premium, which is $19.99 per month.
[41]
Gemini
Google Launches New Versions of Gemini, Including 'Thinking' Model
Google today announced updates to Gemini, the company's AI product that competes with OpenAI's ChatGPT, DeepSeek, and Apple Intelligence. Starting today, Gemini app users can access Google's 2.0 Flash Thinking Experimental model, which is trained to break down prompts into a series of steps to improve its reasoning capabilities. The new model shows its reasoning process, giving users insight ...
Google introduces new Gemini 2.0 models, including Flash, Pro Experimental, and Flash-Lite, offering improved performance, expanded capabilities, and cost-effective options for developers and users across various AI tasks.
Google has announced a significant expansion of its Gemini 2.0 AI model lineup, introducing new variants and making existing models more widely available. This move aims to enhance AI capabilities for developers and users while addressing industry concerns about costs and performance [1][2].
Gemini 2.0 Flash, first introduced at Google I/O 2024, is now generally available through the Gemini API in Google AI Studio and Vertex AI [2]. This model is optimized for high-volume, high-frequency tasks and features a 1 million token context window, making it highly efficient for processing vast amounts of information [3].
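To illustrate what production use might look like, here is a minimal, hedged sketch of assembling a request for the Gemini API's generateContent REST endpoint. The endpoint path and payload shape follow the public v1beta API; the API key and the prompt text are placeholders, and no network call is made here:

```python
import json

# Hypothetical sketch: build a generateContent request for Gemini 2.0 Flash.
# Endpoint path and payload shape follow the public v1beta REST API;
# API_KEY is a placeholder -- substitute a real key from Google AI Studio.
API_KEY = "YOUR_API_KEY"
MODEL = "gemini-2.0-flash"
url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

# A request carries a list of "contents", each made up of one or more "parts";
# a single text part is the simplest possible prompt.
payload = {
    "contents": [
        {"parts": [{"text": "Summarize this document in three bullet points."}]}
    ]
}

body = json.dumps(payload)
print(url)
print(body)
```

Posting `body` to `url` (for example with `requests.post(url, data=body, headers={"Content-Type": "application/json"})`) would return the model's response as JSON; the same payload shape works in Vertex AI with its own endpoint and authentication.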
Key features of Gemini 2.0 Flash include:
A 1 million token context window for processing vast amounts of information
Multimodal reasoning across large datasets
General availability via the Gemini API in Google AI Studio and Vertex AI
Image generation and text-to-speech capabilities coming in future updates
Google has launched an experimental version of Gemini 2.0 Pro, designed for advanced coding and complex problem-solving [2]. According to Sundar Pichai, CEO of Google and Alphabet, "It has the strongest coding performance and ability to handle complex prompts among all models we've released so far" [2].
Notable features include:
A 2 million token context window for processing and analyzing extensive datasets
The strongest coding performance of any Gemini model released so far
Integration with Google Search and code execution tools for improved reasoning
In response to industry scrutiny of AI costs, Google introduced Gemini 2.0 Flash-Lite, a more efficient model that outperforms its predecessor, Gemini 1.5 Flash, while maintaining the same speed and cost [1][4]. This model is particularly suited for cost-sensitive applications and can generate captions for around 40,000 unique images at a cost of less than a dollar on Google AI Studio's paid tier [1][3].
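The per-image cost implied by that figure is easy to check: roughly 40,000 captions for at most a dollar works out to about $0.000025 per caption. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the quoted Flash-Lite pricing claim:
# ~40,000 image captions for under $1 on Google AI Studio's paid tier.
total_cost_usd = 1.00      # upper bound quoted by Google
images = 40_000            # approximate image count

cost_per_caption = total_cost_usd / images
print(f"${cost_per_caption:.6f} per caption")  # $0.000025
```

Actual per-caption cost depends on the tokens consumed by each image and generated caption, so this is only an upper bound on the quoted figure.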
Google is making these new models more accessible across its platforms:
Gemini 2.0 Flash is rolling out in the Gemini app on the web and mobile, for both free and Advanced users
Developers can access 2.0 Flash, 2.0 Pro Experimental, and Flash-Lite via the Gemini API in Google AI Studio and Vertex AI
Gemini 2.0 Flash Thinking Experimental is available for free in the Gemini app
Google emphasized its commitment to AI safety, implementing new reinforcement learning techniques and automated red teaming to address risks, including indirect prompt injection attacks [3][4]. The company has also updated its AI principles, shifting focus to risk-benefit analysis for AI development [3].
The introduction of more cost-efficient models like Flash-Lite comes at a time when the AI industry is seeing disruption from companies like DeepSeek, which recently released a highly capable, low-cost model [5]. Google's CEO, Sundar Pichai, downplayed the impact of such developments, stating that increasing cost-efficiency in AI models has "always been obvious" [5].
As the AI landscape continues to evolve, Google plans to expand multimodal capabilities and further improve the Gemini 2.0 family in the coming months [3]. With these advancements, Google aims to maintain its competitive edge in the rapidly advancing field of artificial intelligence.
Reference
[2]
[3] Analytics India Magazine | Google Releases New Gemini 2.0 Models, Expands AI Capabilities for Developers and Users
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved