On Wed, 17 Jul, 4:02 PM UTC
4 Sources
[1]
Over 1.5 million developers use Gemini globally, India among the largest: Google Deepmind - ET Telecom
Tech giant Google has unveiled Gemma 2, the next generation of its open models for AI innovation, released to all developers. It features improvements in performance along with built-in safety advancements, and is available in 9 billion and 27 billion parameter sizes, optimised by Nvidia to run on next-gen GPUs.

More than 1.5 million developers globally are using Gemini, Google's suite of multimodal artificial intelligence (AI) models, with India boasting one of the largest user bases on Google's developer platform, Google AI Studio, said Seshu Ajjarapu, senior director, Google DeepMind. He was in Bengaluru for Google I/O Connect, the regional variant of the company's annual developer conference, held on Tuesday.

The two million token context window on Gemini 1.5 Pro, previously waitlisted at I/O, is now available to all developers in India, he told ET on Monday. The expansion allows Gemini to process and understand even more information in a single request, leading to more contextual and comprehensive results, Manish Gupta, director, Google DeepMind, told ET.
[2]
Over 1.5 million developers use Gemini globally, India one of the largest user bases, says Google Deepmind executive
Tech giant Google has unveiled Gemma 2, the next generation of its open models for AI innovation, released to all developers. It features improvements in performance along with built-in safety advancements, and is available in 9 billion and 27 billion parameter sizes, optimised by Nvidia to run on next-gen GPUs.

Over 1.5 million developers globally are already using Gemini, Google's suite of multimodal AI models, with India boasting one of the largest user bases on the tech giant's developer platform, Google AI Studio, Manish Gupta, director, Google DeepMind, told ET on Monday. He is in the city for Google I/O Connect, the regional variant of the company's annual developer conference being held on Tuesday.

The two million token context window on Gemini 1.5 Pro, previously waitlisted at I/O, is now available to all developers in India, he said. This expansion allows Gemini to process and understand even more information in a single request, leading to more contextual and comprehensive results, Gupta told ET.

"Today, more than 1.5 million developers globally use Gemini models across our tools. The fastest way to build with Gemini is through Google AI Studio, and India has one of the largest developer bases on Google AI Studio today," Gupta said.

Gemma's tokeniser, which breaks text down into smaller units for AI processing, lends itself to building multilingual solutions that understand and respond to India's diverse languages, as demonstrated by Navarasa, a multilingual variant for Indian languages built on Gemma. The Google DeepMind India team has been focused on enabling open-source resources to help developers build language solutions for India, he said.

"Through Project Vaani, in collaboration with the Indian Institute of Science (IISc), we have been capturing the diversity of India's spoken languages. We're thrilled to have completed phase 1, providing developers with over 14,000 hours of speech data across 58 languages, collected from 80,000 speakers in 80 districts. With our partners at IISc, we're now embarking on phase 2, expanding to cover all states in India spanning 160 districts," Gupta said.

ET had reported on July 11 that under Bhashini, the Ministry of Electronics and Information Technology's (MeitY) flagship effort in AI for Indic languages, IISc's AI and Robotics Technology Park (ARTPARK) plans to open-source 16,000 hours of spontaneous speech from 80 districts as part of Project Vaani, in collaboration with Google.

Building high-quality language models that accurately represent India's linguistic diversity can be a complex challenge, Gupta said. "That's why we're introducing IndicGenBench, a benchmark designed for Indian languages. Covering 29 languages, including many that have not been benchmarked before, IndicGenBench provides a resource to assess and fine-tune language models," he said. "We're also open-sourcing our CALM (Composition of Language Models) framework that allows developers to combine their specialised language models with Gemma models," he added. This enables the creation of more efficient and nuanced solutions that cater to specific use cases and linguistic variations.
For instance, if a developer is building a coding assistant in English, composing it with a Kannada specialist model through CALM may allow it to offer coding assistance in Kannada as well.

"We're going to be launching the Agricultural Landscape Understanding (ALU) Research API, a limited-availability tool designed to make agricultural practices more data-driven and efficient," he said. Farmers face myriad challenges, from accessing subsidies and capital to improving yields and market access. The ALU API looks to address these issues by leveraging AI and remote sensing to map individual farm fields across India, with the potential to provide landscape insights at the farm-field level.

"Built on Google Cloud and our extensive research, including collaborations with the Anthro Krishi team and India's digital AgriStack, the use of ALU information is already being explored by early select partners like Ninjacart, Skymet, Team-Up, IIT Bombay, and the Government of India," Gupta said.

"We're introducing Google Wallet APIs to simplify the integration of loyalty programmes, tickets, and gift cards. For developers using Google Maps Platform, we're introducing India-specific pricing that is up to 70% lower on most APIs to make it even easier to build location-based solutions. Additionally, we're collaborating with the Open Network for Digital Commerce (ONDC), offering developers building for ONDC up to 90% off on select Google Maps Platform APIs," Gupta said.
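Since the article above points developers to Google AI Studio as the fastest way to build with Gemini, the following is a minimal sketch of what a long-context request to Gemini 1.5 Pro could look like from Python. It assumes the google-generativeai SDK and an AI Studio API key in the GOOGLE_API_KEY environment variable; the file name, prompt, and model choice are illustrative, not taken from the announcement.

```python
# Minimal sketch: calling Gemini 1.5 Pro through the Google AI Studio
# Python SDK (google-generativeai). Assumes `pip install google-generativeai`
# and an API key exported as GOOGLE_API_KEY; the document and prompt below
# are illustrative placeholders.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

# A large body of text (for example, a long report or transcript) can be
# passed in a single request thanks to the expanded context window.
with open("long_report.txt", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    ["Summarise the key findings of this document in five bullet points:",
     document]
)
print(response.text)
```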
[3]
Google Expands Gemini 1.5 Pro & Gemma 2 Access for Indian Developers
At Google I/O Connect, Bengaluru, the generative AI powerhouse expanded access to its multimodal AI model Gemini 1.5 Pro and its Gemma 2 family of open models for Indian developers. The new two million token context window on Gemini 1.5 Pro, previously limited, is now accessible in India. Even at one million tokens, users can analyse extensive data, including up to one hour of video or 11 hours of audio, in a single request.

Explaining the business use cases of the models and what Indian developers are looking for, Chen Goldberg, VP and GM, Google Cloud Runtimes, told AIM, "We're talking with them about how they can scale -- scale with their customers, their business, and their teams -- and also how they can run more efficiently." "Our customers in India are critical for us. We expect to see a lot of innovation in AI coming from the local market," she added.

Additionally, the newly released Gemma 2 models, available with nine billion and 27 billion parameters, claim to offer improved performance and safety protocols. Optimised by NVIDIA, these models run efficiently on next-gen GPUs and on a single TPU host in Vertex AI.

Boosting India's GenAI Space

The availability of Gemma in India is likely to be a big leap in the surge of foundational models in Indian languages. Many developers in India prefer Gemma over other open-source models like Meta's Llama for building Indic LLMs. Gemma's tokenizer is particularly effective for creating multilingual solutions, as demonstrated by Navarasa, a multilingual variant of Gemma for Indic languages. At Google I/O in California, the company highlighted the success of this project, developed by Telugu LLM Labs, founded by Ravi Theja Desetty and Ramsri Goutham Golla. It is accessible in 15 Indic languages.

"In India, there are two main areas of focus. Firstly, addressing language-related issues. Secondly, large-scale transformations across various industries, be it in customer engagement or addressing the broader needs of the Indian population," Subram Natarajan, director of customer engineering and field CTO at Google Cloud, told AIM at the event, echoing similar thoughts.

Previously, Vivek Raghavan, co-founder of Sarvam AI, also told AIM that Gemma's tokenizer gives it an advantage over Llama when it comes to Indic languages. He explained that the tokenization tax for Indic languages means asking the same question in Hindi costs three times more tokens than in English, and even more for languages like Odia, due to their underrepresentation in these models.

The company also unveiled IndicGenBench to evaluate the generative capabilities of Indic LLMs, covering 29 languages, including several Indian languages without existing benchmarks.

Going ahead, the company will continue to focus on investments in the developer community and partnerships. "These are crucial for scaling our operations. We understand that these elements are essential not just for our success but for the broader public's benefit," concluded Natarajan.
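The "tokenization tax" Raghavan describes can be checked directly: the same sentence in different scripts produces very different token counts. Below is a minimal sketch, assuming the Gemma 2 tokenizer is available through the Hugging Face transformers library (access to the gated model repository is required); the sample sentences are illustrative and the printed counts are not figures reported in the article.

```python
# Minimal sketch: measuring the "tokenization tax" by counting the tokens
# a tokenizer produces for the same question in English and in Hindi.
# Assumes `pip install transformers sentencepiece` and access to the gated
# google/gemma-2-9b repository on Hugging Face.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")

samples = {
    "English": "What is the weather in Bengaluru today?",
    "Hindi": "आज बेंगलुरु में मौसम कैसा है?",
}

for language, text in samples.items():
    token_ids = tokenizer.encode(text, add_special_tokens=False)
    # Fewer tokens per sentence generally means cheaper and faster inference.
    print(f"{language}: {len(token_ids)} tokens")
```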
[4]
'India is uniquely positioned to drive the next generation of AI innovation': Google DeepMind's Ajjarapu
In an interview on the sidelines of the Google I/O Connect event held in Bengaluru on Wednesday, Ajjarapu reasoned that with its large mobile-first population, micro-payment and digital payment models, a booming startup and developer ecosystem, and diverse language landscape, "India is uniquely positioned to drive the next generation of AI innovation."

In India, Google works with the Ministry of Electronics and Information Technology's Startup Hub to train 10,000 startups in AI, expanding access to its artificial intelligence (AI) models like Gemini and Gemma (a family of open models built on Gemini technology), and introducing new language tools from Google DeepMind India, according to Ajjarapu. It supports "eligible AI startups" with up to $350,000 in Google Cloud credits "to invest in the cloud infrastructure and computational power essential for AI development and deployment."

Karya, an AI data startup that empowers low-income communities, is "using Gemini (also Microsoft products) to design a no-code chatbot," while "Cropin (in which Google is an investor) is using Gemini to power its new real-time generative AI, agri-intelligent platform." Manu Chopra, co-founder and CEO of Karya, said he uses Gemini "to take Karya Platform global and enable low-income communities everywhere to build truly ethical and inclusive AI." Gemini has helped Cropin "build a more sustainable, food-secure future for the planet," according to Krishna Kumar, the startup's co-founder and CEO. Robotics startup Miko.ai "is using Google LLM as a part of its quality control mechanisms," says Ajjarapu. According to Sneh Vaswani, co-founder and CEO of Miko.ai, Gemini is the "key" to helping it "provide safe, reliable, and culturally appropriate interactions for children worldwide."

With an eye on harnessing the power of AI for social good, Google plans to soon launch the Agricultural Landscape Understanding (ALU) Research API, an application programming interface to help farmers leverage AI and remote sensing to map farm fields across India, according to Ajjarapu. The solution is built on Google Cloud and on partnerships with the Anthro Krishi team and India's digital AgriStack. It is being piloted by Ninjacart, Skymet, Team-Up, IIT Bombay, and the Government of India, he pointed out. "This is the first such model for India that will show you all field boundaries based on usage patterns, and show you other things like sources of water," he added.

On local language datasets, Ajjarapu underscored that Project Vaani, in collaboration with the Indian Institute of Science (IISc), has completed phase 1 -- over 14,000 hours of speech data across 58 languages from 80,000 speakers in 80 districts. The project plans to expand its coverage to all states of India, totaling 160 districts, in phase 2. Google also introduced IndicGenBench, a benchmarking tool tailored for Indian languages that covers 29 languages, and is open-sourcing its CALM (Composition of Language Models) framework for developers to integrate specialised language models with Gemma models. For example, integrating a Kannada specialist model into an English coding assistant may help in offering coding assistance in Kannada as well.

Google, which has Gemini Nano tailored for mobile devices, has also introduced the MatFormer framework, developed by the Google DeepMind team in India. According to Manish Gupta, director, Google DeepMind, it allows developers to mix different sizes of Gemini models within a single platform.
This approach optimises performance and resource efficiency, ensuring smoother, faster, and more accurate AI experiences directly on user devices.

India-born Ajjarapu was part of Google's corporate development team that handled mergers and acquisitions when Google's parent Alphabet acquired UK-based AI company DeepMind in 2014. As a result, he got the opportunity to conduct the due diligence and lead the integration of DeepMind with Google. Ajjarapu, though, was not a researcher, and was unsure of contributing meaningfully to DeepMind's mission, which "at that time, was to solve intelligence." This prompted him to quit Google in 2017 after 11 years and launch Lyft's self-driving division. Two years later, Ajjarapu rejoined Google DeepMind as senior director, engineering and product.

Last year, Alphabet merged the Brain team from Google Research and DeepMind into a single unit called Google DeepMind, and made Demis Hassabis its CEO. Jeff Dean, who reports to Sundar Pichai, CEO of Google and Alphabet, serves as chief scientist to both Google Research and Google DeepMind. While the latter unit focuses on research to power the next generation of products and services, Google Research deals with fundamental advances in computer science across areas such as algorithms and theory, privacy and security, quantum computing, health, climate and sustainability, and responsible AI.

Has this merger led to a more product-focused approach at the cost of research, as critics point out? Ajjarapu counters that Google was still training its Gemini foundation models when the units were merged in April 2023, after which it launched the Gemini models in December, followed by Gemini 1.5 Pro, "which has technical breakthroughs like a long context window (2 million tokens that covers about 1 hour of video, or 11 hours of audio, or 30,000 lines of code)." A context window is the amount of text, measured in units called tokens, that a language model can take as input when generating responses.

"Today, more than 1.5 million developers globally use Gemini models across our tools. The fastest way to build with Gemini is through Google AI Studio, and India has one of the largest developer bases on Google AI Studio," he notes.

Google Brain and DeepMind, according to Ajjarapu, were also collaborating "for many years before the merger". "We believe we built an AI super unit at Google DeepMind. We now have a foundational research unit, which Manish is a part of. Our team is part of that foundation research unit. We also have a GenAI research unit, focused on pushing generative models regardless of the technique -- be it large language models (LLMs) or diffusion models that gradually add noise (disturbances) to data (like an image) and then learn to reverse this process to generate new data," said Ajjarapu, who is part of the product unit and whose job is to "take the research and put it in Google products."

Google also has a science team, which is primarily responsible for areas like protein folding and discovering new materials. Protein folding refers to the problem of determining the structure of a protein from its sequence of amino acids alone. "There are many paradigms to go after AI development, and we feel like we're pretty well covered in all of them," he says. "We're now fully in our Gemini era, bringing the power of multimodality to everyone."

And how does Google decide which research projects and product ideas to prioritise and invest in? According to Ajjarapu, the company uses an approach called "match, incubate, and launch."
Is there a problem that's ready to be solved with a technology that's readily available? That's the matching part. For instance, for graph neural nets, a map is naturally a graph, so there is a match. However, even if there's a match, performance is not guaranteed when it comes to generative AI. "You have to iterate it," he says. The next step involves de-risking an existing technology or research breakthrough for the real world, since not all of them are ready to be made into products. This phase is called incubation. The final stage is the launch. "That's the methodical approach that we follow. But given the changing nature of the world, and changing priorities, we try to be nimble," says Ajjarapu.

Gupta, for his part, asks his research team to identify research problems that will have "some kind of a transformative impact on the world, which makes it worthy of being pursued, even if the problem is very hard or the chances of failure are very high."

And how is Google DeepMind addressing ethical concerns around AI, especially biases and privacy? According to Gupta, the company has developed a framework to evaluate the societal impact of technology, created red-teaming techniques, datasets and benchmarks, and shared them with the research community. He adds that his team contributed the SeeGULL dataset (a benchmark to detect and mitigate social stereotypes about groups of people in language models) to uncover biases in language models based on aspects such as nationality and religion. "We work to understand and mitigate these biases and aim for cultural inclusivity too in our models," says Gupta.

Ajjarapu adds that the company's focus is on "responsible governance, responsible research, and responsible impact." He cited the example of Google's SynthID -- an embedded watermark and metadata labelling solution that flags photos (deepfakes) generated using Google's text-to-image generator, Imagen.
Google DeepMind's Gemini AI platform has attracted over 1.5 million developers worldwide, with India emerging as one of the largest user bases. The company is expanding access to advanced AI models and emphasizing India's potential in driving AI innovation.
Google DeepMind's Gemini AI platform has achieved a significant milestone, with over 1.5 million developers worldwide now utilizing its capabilities. This rapid adoption underscores the growing interest in AI technologies among the global developer community. Notably, India has emerged as one of the largest user bases for Gemini, highlighting the country's increasing role in the AI landscape 1.
Manish Gupta, director at Google DeepMind, emphasized India's importance in the AI ecosystem, noting that "India has one of the largest developer bases on Google AI Studio today," a reflection of the country's strong presence in the tech industry and its growing expertise in AI technologies 2.
In a move to further support developers, especially in India, Google has announced expanded access to its advanced AI models. This includes Gemini 1.5 Pro and Gemma 2, which are now available to Indian developers. These models offer enhanced capabilities and are expected to drive innovation in various sectors 3.
Seshu Ajjarapu, senior director of engineering and product at Google DeepMind, highlighted India's unique position in driving the next generation of AI innovation. He pointed to several contributing factors: a large mobile-first population, established micro-payment and digital payment models, a booming startup and developer ecosystem, and a diverse language landscape.
Ajjarapu emphasized that these factors position India to not only adopt AI technologies but also to create innovative solutions that address local and global challenges 4.
The widespread adoption of Gemini AI is expected to have far-reaching effects across multiple industries. Developers are leveraging the platform to create applications in areas such as agriculture, multilingual language services, customer engagement, and educational robotics for children.
As Gemini AI continues to gain traction, experts anticipate a surge in AI-driven innovations emerging from India. However, this rapid growth also brings challenges, including mitigating bias in language models, protecting user privacy, and ensuring responsible governance of generative AI.
Google DeepMind remains committed to supporting developers and fostering responsible AI practices as the technology evolves and becomes more integrated into various aspects of daily life and business operations.
Reference
[1] Over 1.5 million developers use Gemini globally, India among the largest: Google Deepmind - ET Telecom
[2] Over 1.5 million developers use Gemini globally, India one of the largest user bases, says Google Deepmind executive
[3] Google Expands Gemini 1.5 Pro & Gemma 2 Access for Indian Developers
[4] 'India is uniquely positioned to drive the next generation of AI innovation': Google DeepMind's Ajjarapu