Google AI: A Look Back at Major Announcements in 2024

2024 was a groundbreaking year for Google AI, marked by significant advancements and new products across diverse domains, from generative AI tools to healthcare, education, and environmental sustainability. Here's a look at the most notable developments:

Highlights

  • Gemini Rebrand: Bard became Gemini with advanced features for reasoning, coding, and multimodal tasks.
  • Generative AI Creativity: Tools like ImageFX, MusicFX, and TextFX empowered creators worldwide.
  • AI in Healthcare: Med-PaLM 2 and MedLM models improved diagnostics and administrative workflows.

2024 has been a significant year for Google AI in terms of both technological advancements and product announcements. From the Circle to Search feature to the recent launch of the Gemini 2.0 models, the year brought a wealth of product releases and updates from Google. As the company prepares for more AI announcements in 2025, let's take a look at some of the top Google AI news stories of 2024.

Also Read: Google AI Powers Over 300 Real-World Gen AI Use Cases: December 2024 Edition

January 2024

New Ways to Search

On January 17, 2024, Google's VP of Search, Elizabeth Reid, unveiled two major updates to its Search experience: Circle to Search and an AI-powered multisearch experience. Circle to Search allows users to highlight, circle, or tap on text, images, or videos on their Android phone screens to search without leaving their current app. This feature launched globally on January 31, 2024, for select devices, including the Pixel 8 and Samsung Galaxy S24. The updated multisearch leverages generative AI to provide detailed insights when users point their camera or upload a photo and ask questions, offering more nuanced and helpful results.

Google and Samsung Partnership

In January, Google's SVP of Platforms and Ecosystems, Hiroshi Lockheimer, announced an extended partnership with Samsung to bring AI capabilities to the new Samsung Galaxy S24 series using Google's Gemini AI models. Gemini Pro will power enhanced features in Samsung apps like Notes and Voice Recorder, enabling tasks such as lecture summarization, while Imagen 2 supports advanced photo editing in the Gallery app. The Galaxy S24 will also feature Gemini Nano for on-device AI in Google Messages, offering features like Magic Compose and Photomoji for personalized communication. Additionally, Circle to Search introduces a new way to search directly from the phone screen, and Android Auto gains AI-powered text summaries and smart replies.

Chrome with 3 Gen AI Features

On January 23, Vice President of Chrome, Parisa Tabriz, announced experimental AI features in Google Chrome. The M121 release introduced generative AI features aimed at improving web browsing efficiency and personalization. Users on Macs and Windows PCs in the US can enable these features via Chrome's "Experimental AI" settings. Key updates include Smart Tab Organization, AI-Generated Themes, and the "Help Me Write" tool.

Also Read: Google Launches Gemini 2.0: A New AI Model for the Agentic Era

February 2024

On February 1, Google Labs Product Manager Kristin Yim announced the release of ImageFX, a new image-generation tool powered by the Imagen 2 model, enabling users to create high-quality visuals with intuitive prompts and creative exploration features. Alongside updates to MusicFX for generating longer, higher-quality music tracks and TextFX for enhanced creative writing, these tools expand possibilities for AI-powered creativity. With safeguards like SynthID watermarks and metadata for transparency, Google emphasized that these tools align with its commitment to responsible AI and are available in the US, New Zealand, Kenya, and Australia.

On February 8, Sundar Pichai, CEO of Google and Alphabet, announced that Bard would be rebranded as Gemini and introduced Gemini Advanced, which uses Gemini Ultra 1.0. Gemini Advanced offers enhanced reasoning, coding, and creative collaboration. Gemini is now available in over 40 languages, with a mobile app debuting on Android and iOS. It also powers Workspace features like Gmail and Docs and integrates into Google Cloud, replacing Duet AI.

Sissie Hsiao, Vice President and General Manager of Gemini Experiences (formerly Bard) and Google Assistant, further explained the rebranding. Gemini Advanced, powered by the Ultra 1.0 model, enables complex tasks like coding, reasoning, and creative collaboration. A new mobile Gemini app is rolling out for Android and iOS, enabling multimodal AI assistance, from generating images to analyzing text or offering contextual help.

On February 15, Sundar Pichai and Demis Hassabis, CEO of Google DeepMind, introduced Gemini 1.5, a next-generation AI model with improvements in performance, efficiency, and long-context understanding. Using a Mixture-of-Experts (MoE) architecture, Gemini 1.5 Pro supports a 1 million-token context window, enabling it to process large amounts of content like long documents, videos, and large codebases.
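
For developers, this long-context capability is exposed through the Gemini API. Below is a minimal sketch using the google-generativeai Python SDK that sends a large local document to a Gemini 1.5 model in a single request; the model name, API key placeholder, and file path are illustrative assumptions, not examples taken from Google's documentation.

    import google.generativeai as genai

    # Assumes an API key created in Google AI Studio.
    genai.configure(api_key="YOUR_API_KEY")

    # Assumed model identifier; check the SDK's list_models() output for current names.
    model = genai.GenerativeModel("gemini-1.5-pro")

    # Hypothetical local file standing in for a long document that fits
    # inside the model's large context window.
    with open("long_report.txt", "r", encoding="utf-8") as f:
        document = f.read()

    response = model.generate_content(
        [document, "Summarize the key findings of this report in five bullet points."]
    )
    print(response.text)

Because the entire document travels in one request, tasks that fit inside the context window need no retrieval or chunking pipeline.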

On February 21, Jeanine Banks, VP and GM of Developer X, and Tris Warkentin, Director of Google DeepMind, introduced Gemma, a family of lightweight, open AI models designed for developers and researchers to build responsible AI solutions. Developed by Google DeepMind and inspired by the Gemini models, Gemma is available in two sizes—Gemma 2B and 7B—with pre-trained and instruction-tuned variants. These models deliver state-of-the-art performance for their sizes and can run on laptops, desktops, or cloud platforms like Google Cloud's Vertex AI. Google also partnered with Nvidia to optimize Gemma for Nvidia GPUs, ensuring performance and integration with cutting-edge technology.
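
Because the Gemma weights are openly available, they can be run with standard open-source tooling. The sketch below assumes the published google/gemma-2b-it checkpoint on Hugging Face (downloading it requires accepting the Gemma license) and a machine with enough memory; the prompt and generation settings are illustrative.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-2b-it"  # instruction-tuned 2B variant
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    prompt = "Explain in two sentences why open model weights are useful for researchers."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))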

Also Read: Google Launches AI Campus in London to Equip Students with Digital Skills

March 2024

AI in Health

In March, Google provided an update on its generative AI efforts in health, including product updates and new research. Yossi Matias, Vice President of Engineering and Research, announced that Fitbit and Google Research are working together to build a Personal Health Large Language Model (LLM) that can power personalized health and wellness features in the Fitbit mobile app, offering more insights and recommendations based on data from Fitbit and Pixel devices.

Google Health is also advancing AI in healthcare with Med-PaLM 2, a healthcare-optimized large language model (LLM) now used globally to build solutions for various tasks, including streamlining nurse handoffs and supporting clinician documentation. At the end of 2023, Google introduced MedLM, a family of foundation models for healthcare built on Med-PaLM 2 and available through the Vertex AI platform.

According to Google, new multimodal capabilities in models like MedLM for Chest X-ray aim to enhance radiology workflows by accurately classifying chest X-rays. A version of Gemini fine-tuned for medicine achieves breakthroughs in reasoning and multimodal understanding, excelling in tasks like medical exams and image-based diagnostics.

"We're also seeing promising results from our fine-tuned models on complex tasks such as report generation for 2D images like X-rays, as well as 3D images like brain CT scans – representing a step-change in our medical AI capabilities. While this work is still in the research phase, there’s potential for generative AI in radiology to bring assistive capabilities to health organizations," Yossi Matias said.

Google also highlighted that generative AI is already serving as an assistive tool for clinicians, helping with administrative tasks such as documentation that typically take up hours of their time.

Earlier in 2024, Google introduced AMIE (Articulate Medical Intelligence Explorer), a research AI system built on an LLM optimized for diagnostic reasoning and clinical conversations. Google announced plans to test this system with healthcare organizations to evaluate its potential in supporting clinical conversations.

On March 19, 2024, Karen DeSalvo, Chief Health Officer at Google, highlighted several updates during Google Health's Check Up event, showcasing how AI is helping the company connect people to health information and insights. New features include AI-powered Google Lens for visual health searches, now available in over 150 countries, and enhanced visual health results on mobile for conditions like migraines and pneumonia. Additionally, YouTube tools like Aloud are breaking language barriers by translating and dubbing health videos, such as first-aid guides and courses on implicit bias in healthcare. With Fitbit Labs, users can explore experimental AI features that offer deeper insights into personal health data, according to Google.

"At Google, we believe AI has the potential to transform health for everyone, everywhere, not just some people in some places," Karen DeSalvo added.

On March 20, Yossi Matias, Vice President of Engineering and Research, announced that Google is using AI for reliable flood forecasting on a global scale. "A paper published in Nature today shows how Google Research uses AI to accurately predict riverine flooding and help protect livelihoods in over 80 countries up to 7 days in advance, including in data-scarce and vulnerable regions."

"Our research work began with an initial pilot in India’s Patna region. Bihar, where Patna is located, is one of India's most flood-prone states, where a large part of the population lives under the recurring threat of devastating floods. Working with local government officials and using local real-time data, we created flood forecasts, which we incorporated into Google Public Alerts in 2018," Yossi Matias explained.

Generative AI Accelerator

On March 28, Annie Lewin, Senior Director of Global Advocacy and Head of Asia Pacific at Google.org, announced the launch of the Generative AI Accelerator, a six-month program supporting 21 nonprofits with USD 20 million in funding, technical training, mentorship, and AI coaching.

Organizations that joined the accelerator program include Benefits Data Trust, Beyond 12, CareerVillage, Climate Policy Radar, CodePath, EIDU, Full Fact, IDinsight, Jacaranda Health, Justicia Lab, Materiom, mRelief, Opportunity@Work, Partnership to End Addiction, Quill.org, Tabiya, Tarjimly, U.S. Digital Response, and the World Bank.

April 2024

In April, Google announced the launch of AI editing tools for Google Photos users, alongside investments in AI infrastructure and skills.

On April 9, 2024, Sundar Pichai shared some of the highlights from Cloud Next '24.

Google highlighted its advancements in AI, including the launch of Gemini 1.5 Pro and TPU v5p, offering improved performance and scalability for AI models. The company is expanding AI capabilities through Vertex AI, enhancing code generation and cybersecurity tools, and integrating generative AI features into Google Workspace. Notable customer innovations include Mercedes-Benz using AI for sales and customer service, Uber improving employee productivity with AI agents, and a partnership with Palo Alto Networks to enhance cybersecurity.
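
On Google Cloud, the Gemini models are reachable through Vertex AI. The following is a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform); the project ID, region, and model identifier are placeholder assumptions and should be replaced with values from your own Cloud project and the current Vertex AI model list.

    import vertexai
    from vertexai.generative_models import GenerativeModel

    # Placeholder project and region; replace with your own Cloud project settings.
    vertexai.init(project="your-gcp-project", location="us-central1")

    # Assumed model identifier on Vertex AI; confirm against the Model Garden listing.
    model = GenerativeModel("gemini-1.5-pro")

    response = model.generate_content(
        "Draft a short release note for a new code-generation feature."
    )
    print(response.text)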

On April 10, 2024, at Google Cloud Next '24, Thomas Kurian, CEO of Google Cloud, announced advancements across Google Cloud and its related service portfolio.

"More than 60 percent of funded gen AI startups and nearly 90 percent of gen AI unicorns are Google Cloud customers, including companies like Anthropic, AI21 Labs, Contextual AI, Essential AI, and Mistral AI who are using our infrastructure," Thomas Kurian said, adding, "Eterprises like Deutsche Bank, Estee Lauder, Mayo Clinic, McDonald’s, and WPP are building new gen AI applications on Google Cloud. And today, we are announcing new or expanded partnerships with Bayer, Cintas, Discover Financial, IHG Hotels and Resorts, Mercedes Benz, Palo Alto Networks, Verizon, WPP, and many more."

"Customers, including Best Buy, Etsy, The Home Depot, ING Bank, and many others, are seeing the benefits of powerful, accurate, and innovative agents that make generative AI so revolutionary," Kurian added, noting, "AI companies globally, like Bending Spoons and Kakao Brain, are building their models on our platform."

Orange, which operates in 26 countries where local data must be stored in each country, leverages AI on GDC to improve network performance and enhance customer experiences, he explained.

He also added that customers like Bayer, Mayo Clinic, Mercado Libre, NewsCorp, and Vodafone are already seeing the benefits of AI with data. Additionally, Walmart is building data agents to modernize its shopping experiences.

"Using Gemini, we've enriched our data, helping us improve millions of product listings across our site and ultimately enabling customers to make better decisions when they shop with Walmart," said Suresh Kumar, EVP, Global Chief Technology Officer and Chief Development Officer at Walmart.

On April 26, Ruth Porat, President and Chief Investment Officer and Chief Financial Officer of Alphabet and Google, announced a USD 3 billion investment to build or expand data center campuses in Virginia and Indiana, strengthening internet infrastructure in the US, as well as a USD 75 million AI Opportunity Fund and the Google AI Essentials course. Beyond the nonprofit sector, Google also announced partnerships with employers such as Citigroup and educational institutions like Miami Dade College to expand access to AI skilling programs.

May 2024

At Google I/O 2024, the company's annual developer conference, Sundar Pichai shared how Google is building more helpful products and features with AI — including improvements across Search, Workspace, Photos, Android, and more.

At the event, Google revealed numerous AI advancements, including enhancements in AI models, generative media tools, search improvements, Workspace integrations, Android features, and developer tools. Highlights include the introduction of Gemini 1.5 Flash and Pro models, the unveiling of Imagen 3 for high-quality image generation, and the introduction of Veo for video generation. Additionally, Google announced updates to Search with AI Overviews, planning capabilities, and multi-step reasoning.

Gemini models are now integrated into Gmail, Docs, and other Workspace tools, while Android advancements include multimodal capabilities for Gemini Nano and enhanced privacy features. Developers can benefit from the Gemini API Developer Competition, new open-source models, and improved tools for Android development. Google also emphasized responsible AI practices, including red teaming and the expansion of SynthID for text and video watermarking.

At the event, Google also introduced LearnLM, a new family of models based on Gemini and fine-tuned for learning.

On May 8, 2024, Google introduced AlphaFold 3, a new AI model developed by Google DeepMind and Isomorphic Labs. "By accurately predicting the structure of proteins, DNA, RNA, ligands, and more, and how they interact, we hope it will transform our understanding of the biological world and drug discovery," Google said at the time of the announcement.

So far, millions of researchers worldwide have used AlphaFold 2 to make discoveries in areas including malaria vaccines, cancer treatments, and enzyme design, Google said in a blog post.

AlphaFold 3 enables drug design by predicting molecules commonly used in drugs, such as ligands and antibodies, that bind to proteins and change how they interact in human health and disease. Using AlphaFold 3 in combination with a complementary suite of in-house AI models, Isomorphic Labs is working on drug design for internal projects, as well as with pharmaceutical partners, Google said.

June 2024

On June 6, Brian Sullivan, Senior Program Manager for Geo Sustainability, announced in a blog post the launch of AI-powered datasets from Global Fishing Watch. These datasets map global ocean infrastructure and vessels that do not broadcast on public monitoring systems.

Global Fishing Watch, co-founded by Google, released two new datasets to understand the impact of human activity on the seas. According to Google, these datasets map global ocean infrastructure and non-broadcasting vessels, providing insights into offshore renewable energy development, carbon emissions from maritime vessels, and marine protection. Researchers used Google Earth Engine and AI to analyze satellite imagery, creating a first-of-its-kind global map that depicts daily human activity at sea.
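
The imagery analysis behind these datasets runs on Google Earth Engine, which also offers a Python API for working with large geospatial collections. The sketch below shows the general pattern of filtering an image collection by date and region; the asset ID is hypothetical, so consult the Earth Engine Data Catalog or Global Fishing Watch for the actual published dataset paths.

    import ee

    ee.Authenticate()  # one-time, browser-based authentication
    ee.Initialize()    # recent client versions may also require a Cloud project argument

    # Hypothetical asset ID standing in for a vessel-detection collection.
    detections = ee.ImageCollection("GFW/HYPOTHETICAL/vessel_detections")

    # Restrict to a region and time window of interest.
    region = ee.Geometry.Rectangle([-10.0, 35.0, 5.0, 45.0])  # western Mediterranean
    filtered = detections.filterDate("2023-01-01", "2023-12-31").filterBounds(region)

    print("Images matching the filter:", filtered.size().getInfo())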

On June 27, 2024, Clement Farabet, VP of Research at Google DeepMind, and Tris Warkentin, Director at Google DeepMind, announced the official release of Gemma 2 to researchers and developers worldwide.

"Gemma, a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. We've continued to grow the Gemma family with CodeGemma, RecurrentGemma and PaliGemma — each offering unique capabilities for different AI tasks and easily accessible through integrations with partners like Hugging Face, Nvidia and Ollama," Google said.

Also Read: Google Announces AI Collaborations for Healthcare, Sustainability, and Agriculture in India

July 2024

On July 18, Heather Adkins, VP of Security Engineering, and Phil Venables, VP and Chief Information Security Officer (CISO) of Google Cloud, announced the launch of the Coalition for Secure AI (CoSAI) and its founding member organizations. The new industry forum will invest in AI security and build on Google's Secure AI Framework.

At the Aspen Security Forum, Google introduced the Coalition for Secure AI (CoSAI), a collaborative initiative aimed at developing comprehensive security measures for AI. CoSAI includes founding members Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, Nvidia, OpenAI, PayPal, and Wiz — and it will be housed under OASIS Open, the international standards and open-source consortium.

According to Google, the initiative will focus on addressing the unique security challenges posed by AI, both in real time and for future risks.

Also on July 18, 2024, Google shared three things parents and students had told the company about how generative AI can support learning. According to Google, AI can enhance learning by providing real-time feedback for parents, offering deeper insights into unfamiliar subjects, and customizing learning for students with diverse needs.

However, Google emphasizes that teachers remain irreplaceable, with AI serving as a tool to support them rather than replace them. By keeping educators at the core of the process, AI can help reduce administrative tasks, allowing teachers to focus on inspiring and nurturing students’ curiosity.

"While generative AI is poised to play a pivotal role in education, human educators remain irreplaceable. Generative AI can enhance learning, without undermining it, which is why guardrails and transparency are key to making generative AI the powerful tool that it promises to be for education," Google said.

Earlier, on July 15, Amar Subramanya, Vice President of Engineering, Gemini Experiences, announced the rollout of Gemini 1.5 Flash in the Gemini app across more than 40 languages and over 230 countries and territories, along with a new related-content feature and expanded Gemini experiences for teens and on mobile.

August 2024

In August 2024, at the Made by Google event, the company made several key hardware announcements accompanied by software updates. Sameer Samat, President of the Android Ecosystem, announced the infusion of AI across every part of Google's tech stack — from data center infrastructure to the operating system to devices.

On the new devices, Gemini helps users complete complex tasks and generate images, improving the overall experience.

September 2024

On September 16, Christopher Van Arsdale, Climate and Energy Lead at Google Research, announced the launch of Audio Overviews in NotebookLM and Google's partnership with wildfire authorities to launch FireSat, a new global satellite constellation designed specifically to detect and track wildfires as small as a classroom within 20 minutes.

Also Read: Google AI Innovations: Key Announcements From October and November 2024

October 2024

October saw additional AI updates across products like Pixel, NotebookLM, Search, and Shopping, which were already covered in previous stories.

November 2024

In November, Google launched the Gemini app for iPhone and introduced new ways to shop with Google Lens, Google Maps, and more.

Also Read: Google AI Announcements in December 2024: From Agentic AI to Quantum Computing

December 2024

In December, Google introduced Gemini 2.0, a model built for the agentic era, and the Willow quantum chip, among other innovations.
