Sundar Pichai, CEO of Alphabet and Google, announced a significant leap forward in artificial intelligence (AI) with the introduction of Google Gemini AI models at the company’s I/O conference this week.
Pichai emphasized Google’s extensive experience in AI, stating the company has been “innovating at every layer of the stack” for over a decade. He declared the arrival of the “Gemini era” for Google.
Gemini’s capabilities will be integrated across various Google products, including Search, Photos, Workspace, and Android.
“We’re still in the early stages of this AI platform shift,” Pichai acknowledged, “but the potential for creators, developers, startups, and everyone is immense.”
Over 1 million users have signed up for Gemini Advanced in just three months, demonstrating strong interest in Google’s most advanced AI models. In addition, more than 1.5 million developers are already using Gemini models to debug code, generate insights, and build next-generation AI applications.
A key strength of Gemini AI is its ability to process information across formats, including text, images, video, and code. The 1.5 Pro version of Gemini marks a major advance in handling long contexts, consistently processing a record-breaking 1 million tokens in production.
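For developers, these capabilities are reached programmatically through the Gemini API. The snippet below is a minimal, illustrative sketch of a multimodal call using Google’s generative AI Python SDK; the API key, model identifier, file name, and prompt are placeholders and assumptions, not details from the announcement, so check current documentation before relying on them.

```python
# Minimal sketch: a text-plus-image request to Gemini 1.5 Pro via the
# google-generativeai SDK. All concrete values below are placeholders.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

# The long-context, multimodal model discussed above (identifier assumed).
model = genai.GenerativeModel("gemini-1.5-pro")

# A single request can mix text and an image; the same call also accepts
# very long text inputs thanks to the 1 million token context window.
image = PIL.Image.open("chart.png")  # hypothetical local file
response = model.generate_content(
    ["Summarize the trend shown in this chart in two sentences.", image]
)

print(response.text)
```

Conceptually, this same call pattern is what lets developers combine code, documents, and media in a single request, which is where the long context window becomes relevant.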
“We’ve also introduced new experiences, including mobile access through the Gemini app, now available on both Android and iOS,” Pichai added.
One of the most transformative applications of Gemini will be in Google Search. Pichai announced the launch of “AI Overviews,” a completely revamped search experience, which will be available to all users in the US this week, with a global rollout planned soon.