Google, on March 12, unveiled the next generation of its open AI model family, Gemma 3. The company claims it is the "world's best single-accelerator model," outperforming Meta's Llama 3, DeepSeek V3, and OpenAI's o3-mini on a host with a single GPU. It also boasts optimised support for Nvidia GPUs and dedicated AI hardware.
Also Read: Google AI: A Look Back at Major Announcements in 2024
1. Google Gemma 3 Models
Built on the same research powering Gemini 2.0, Gemma 3 models offer high performance while being optimised for diverse hardware, from mobile devices to high-end GPUs. Gemma 3 comes in four sizes—1B, 4B, 12B, and 27B parameters—allowing developers to choose the best fit for their needs. It offers out-of-the-box support for over 35 languages and pretrained support for over 140 languages with a 128k-token context window.
Google introduced the Gemma family of open models in February 2024 as part of its strategy to attract developers and researchers to its AI offerings and compete with Meta's Llama. Google said these models have been downloaded over 100 million times, and the developer community has created more than 60,000 Gemma variants to date.
According to the blog post, these models are designed to run fast, directly on devices — from phones and laptops to workstations — helping developers create AI applications wherever people need them. The models can also analyse text, images, and short videos.
The new model also introduces advanced text and visual reasoning, function calling for AI-driven workflows, and official quantized versions to boost performance while reducing computational requirements and costs.
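Function calling, in general terms, lets a model emit a structured request that application code then executes. The following is a minimal, model-agnostic sketch of that loop; the tool name, JSON shape, and dispatch logic are illustrative assumptions, not Gemma 3's actual API:

```python
import json

# Registry of callable tools the application exposes to the model.
# get_weather is a stand-in; a real tool would call an actual service.
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def dispatch(model_output: str):
    """Parse a JSON tool call emitted by the model and execute it."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A model with function-calling support would emit something like:
tool_call = '{"name": "get_weather", "arguments": {"city": "Delhi"}}'
result = dispatch(tool_call)
print(result)  # {'city': 'Delhi', 'temp_c': 21}
```

In a real workflow, the tool's result would be fed back to the model so it can compose a final natural-language answer.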
Alongside Gemma 3, Google has launched ShieldGemma 2, a 4B image safety model that classifies content for safer AI applications. It provides customisable safety checks across categories like dangerous content, explicit imagery, and violence, giving developers greater control over AI-generated visuals.
Google said Gemma 3 and ShieldGemma 2 integrate with popular AI tools, including Hugging Face, Ollama, PyTorch, JAX, and Google AI Edge. Developers can start building immediately via Google AI Studio, Kaggle, or Hugging Face, with deployment options across Vertex AI, Cloud Run, and Nvidia's API Catalog. Optimisations for Nvidia GPUs, Google Cloud TPUs, and AMD ROCm ensure smooth execution on various hardware platforms, according to Google.
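For developers who want to try the model locally, one of the supported routes is Ollama. A minimal CLI sketch, assuming Ollama is installed and the `gemma3:4b` tag is available in its model library:

```shell
# Pull the 4B-parameter Gemma 3 model, then run a one-off prompt locally.
# Model tag and availability are assumptions; check the Ollama library.
ollama pull gemma3:4b
ollama run gemma3:4b "Summarise what a 128k-token context window allows."
```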
Google is continuing to promote Gemma with Google Cloud credits, and the Gemma 3 Academic program will allow academic researchers to apply for USD 10,000 worth of cloud credits to accelerate their Gemma 3-based research.
Google DeepMind Unveils Gemini Robotics for Advanced AI-Powered Robots
Google DeepMind has introduced Gemini Robotics and Gemini Robotics-ER, two AI models based on Gemini 2.0 that bring embodied reasoning to robots, enabling them to interact with the physical world. These models mark a step in AI-driven robotics by enhancing spatial understanding and multimodal reasoning across text, images, audio, and video, according to Google.
Gemini Robotics, the more advanced of the two, is a vision-language-action (VLA) model that incorporates physical actions as an output, allowing direct control of robots. It is designed to make robots more general, interactive, and dexterous, enabling them to adapt to new situations, respond dynamically to instructions, and perform complex tasks requiring fine motor skills. The model has been trained to function across different robotic platforms, including ALOHA 2, Franka arms, and the humanoid Apollo robot developed by Apptronik.
"Because it's built on a foundation of Gemini 2.0, Gemini Robotics is intuitively interactive. It taps into Gemini's advanced language understanding capabilities and can understand and respond to commands phrased in everyday, conversational language and in different languages," said Carolina Parada, the senior director and head of robotics at Google DeepMind.
Meanwhile, Gemini Robotics-ER enhances spatial reasoning, allowing roboticists to run their own programs using Gemini's embodied intelligence. The model excels in 3D object detection, spatial planning, and intuitive grasping, improving robot efficiency in real-world tasks. It demonstrates a 2x-3x improvement over previous versions, offering end-to-end control over perception, state estimation, and planning, according to the blog post.
Google DeepMind said it is integrating Gemini Robotics-ER with low-level controllers to prevent unsafe actions and developing a data-driven constitution framework to align robotic behavior with human values.
Last year, Google DeepMind introduced its "Robot Constitution," a set of Isaac Asimov-inspired rules for its robots to follow. The company also released the new ASIMOV dataset to advance safety research in AI-driven robotics.
Google DeepMind is working with Apptronik to "build the next generation of humanoid robots." It's also giving "trusted testers" access to its Gemini Robotics-ER model, including Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools.
Google Announces SpeciesNet, an AI Model to Identify Animal Species
Google has announced the open-source release of SpeciesNet, an AI model designed to identify animal species by analysing photos from camera traps.
Motion-triggered wildlife cameras, or "camera traps," generate vast quantities of image data, and manually processing these images is a "significant bottleneck," according to Google. An AI-based solution can accelerate that processing, and SpeciesNet is claimed to be one such solution.
In a GitHub blog post dedicated to the SpeciesNet AI model, Google said: "The species classifier (SpeciesNet) was trained at Google using a large dataset of camera trap images and an EfficientNet V2 M architecture. It is designed to classify images into one of more than 2000 labels, covering diverse animal species, higher-level taxa (like 'mammalia' or 'felidae'), and non-animal classes ('blank,' 'vehicle'). SpeciesNet has been trained on a geographically diverse dataset of over 65 million images, including curated images from the Wildlife Insights user community, as well as images from publicly available repositories."
The full SpeciesNet ensemble pairs the species classifier with an object detector, combining the two models using a set of heuristics and, optionally, geographic information to assign each image a single category.
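That combination step can be pictured as a rule-based merge: the detector gates out images without animals, the classifier's top label wins otherwise, and geographic information can filter out species implausible for the capture location. A toy sketch of the idea follows; the labels, scores, and geofence rule are illustrative assumptions, not SpeciesNet's actual heuristics:

```python
def combine(detector_label, classifier_scores, region=None, geofence=None):
    """Toy ensemble: the detector gates the classifier, geography filters labels."""
    if detector_label != "animal":
        return detector_label  # e.g. "blank" or "vehicle"
    # Rank the classifier's labels by confidence, highest first.
    ranked = sorted(classifier_scores, key=classifier_scores.get, reverse=True)
    if region and geofence:
        # Keep only species known to occur in this region; fall back if none match.
        ranked = [l for l in ranked if region in geofence.get(l, [])] or ranked
    return ranked[0]

scores = {"felidae": 0.35, "panthera tigris": 0.55, "canis lupus": 0.10}
print(combine("animal", scores, region="IN",
              geofence={"panthera tigris": ["IN"], "canis lupus": ["US"]}))
# panthera tigris
```

Here the geofence removes the wolf (not found in the assumed region) before the top-ranked remaining label is returned.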
"Since 2019, thousands of wildlife biologists have used SpeciesNet through a Google Cloud-based tool called Wildlife Insights to streamline biodiversity monitoring and inform conservation decision-making. The SpeciesNet AI model release will enable tool developers, academics and biodiversity-related startups to scale monitoring of biodiversity in natural areas," said Mike Werner, Head of Sustainability Programs and Innovation at Google, in a blog post on March 3.
Google for Startups Accelerator: AI for Nature and Climate
Google also announced the launch of its first AI-focused startup accelerator to support companies developing technology to protect and restore nature. The 10-week virtual program, open to startups across the Americas, will provide mentoring and technical guidance from Google engineers and industry experts. Applications are open from March 3 to March 31, 2025, with the program kicking off in May 2025.
Google.org has pledged USD 3 million to iCS (Instituto Clima e Sociedade) to fund AI-driven projects from Brazilian nonprofits and research centers in three key areas: reversing biodiversity loss, the bioeconomy, and regenerative agriculture.
Also Read: OpenAI and Anduril Partner to Develop AI Solutions for US National Security
2. Palantir to Work With Archer Aviation
Archer Aviation and Palantir Technologies announced a partnership on March 13, aiming to "build the AI foundation for the future of next-gen aviation technologies."
The two companies plan to leverage Palantir Foundry and AIP to accelerate the scaling of Archer's aircraft manufacturing at its facilities in Georgia and Silicon Valley, and to advance software solutions that drive innovation across the entire value chain.
This partnership includes developing software utilising AI to improve a range of aviation systems, including air traffic control, movement control and route planning, with the goal of improving efficiency, safety and affordability across the industry, the companies said.
"By integrating Palantir's advanced AI capabilities with Archer's innovative approach to aircraft manufacturing and operations, we are setting the stage for a transformative leap in efficiency, safety and sustainability," said Alex Karp, Palantir co-founder and CEO.
Adam Goldstein, Archer's founder and CEO, said, "While the aviation industry has an unmatched level of safety, much of the legacy technology supporting the industry has only incrementally advanced. AI and software present an inflection point that will shape the future of aviation. We're proud to be partnering with Karp and the entire Palantir team to build the AI backbone for the next generation of aviation."
Also Read: Qualcomm Unveils Dragonwing Brand for Industrial and Embedded IoT Solutions
3. Qualcomm and Palantir Partner to Bring AI to the Edge
Big data analytics software company Palantir Technologies is bringing its Ontology-based data modelling platform and AI capabilities to edge IoT devices containing Qualcomm Dragonwing chipsets, according to a Qualcomm announcement on March 13.
This collaboration enables AI-powered applications built using Palantir's OSDK and AIP to run directly on devices powered by Dragonwing platforms for hardware-accelerated multimedia and AI experiences, even in partially disconnected or fully air-gapped environments.
Qualcomm introduced the Dragonwing brand at MWC25 to carve out a distinct market presence for its IoT and automotive-focused wireless chips, which were previously marketed under its Snapdragon mobile banner. On March 10, Qualcomm announced an agreement to acquire Edge Impulse, which the company says will enhance its offerings for developers and expand its leadership in AI capabilities to power AI-enabled products and services across IoT.
According to the official release, Edge Impulse's edge AI platform enables over 170,000 developers to create, deploy, and monitor AI models on a wide array of edge devices—with support for varied microcontrollers and processors featuring AI accelerators from multiple semiconductor providers.
"This collaboration between Qualcomm and Palantir aims to extend AI capabilities to the edge to harness real-time insights and make data-driven decisions with unprecedented speed and accuracy, even in offline remote environments," the official release said.
Also Read: CES 2025: Qualcomm Unveils AI Innovations and Collaborations Across Multiple Sectors
By combining Qualcomm Technologies' AI-powered edge processors and connected software with Palantir's AI platform, ODMs, OEMs and enterprise customers will be able to build and deploy AI solutions for a variety of sectors, starting with manufacturing, industrial, and automotive.
"By harnessing Qualcomm Technologies' advanced edge AI capabilities across a broad portfolio of devices via our Qualcomm AI Stack and connected services and using Palantir's Ontology enterprise offerings, our customers can revolutionize how data is processed and utilised. This powerful combination delivers transformative benefits across industries, enhancing real-time insights and decision-making at the edge," said Nakul Duggal, group general manager, automotive, industrial and embedded IoT, and cloud computing at Qualcomm Technologies.
Robert Imig, head of USG research and development, Palantir Technologies, added, "Our collaboration with Qualcomm Technologies is a significant step to bring one of Palantir's most core elements, the Ontology, to the edge. This breakthrough will enable our customers to mirror their data modeling in cloud stacks with remote edge devices powered by Qualcomm Technologies, thus enabling seamless ability to port logic and AI from the cloud to the edge, providing Palantir grade decision making for Industrial IoT use-cases. Together, we are transforming how industries operate, ensuring that the most advanced software is accessible wherever needed."
Also Read: Anthropic Launches AI Hybrid Reasoning Model and New Coding Assistant
4. CommBank Expands Partnership with Anthropic
Commonwealth Bank of Australia (CBA) announced on March 14 an expanded partnership with, and investment in, Anthropic, an artificial intelligence (AI) safety and research company. The expanded partnership will enable CBA to leverage Anthropic's AI capabilities and expertise in safe AI practices to help accelerate AI adoption.
Gavin Munroe, Group Chief Information Officer at CBA, said: "AI is delivering more personalised and intuitive customer experiences today and it holds great potential to reimagine how our customers bank in the future. This enhanced strategic partnership and investment in Anthropic will help our teams to accelerate our AI capability."
"With our shared values and vision, this partnership is also intended to uplift and unlock AI potential for our engineers, who will have an opportunity to work with Anthropic's AI experts to explore how we can accelerate how we build the products to serve and protect our customers, including how we can further protect customers and communities from scams and fraud."
Krishna Rao, Anthropic's Chief Financial Officer, said: "By combining Anthropic's advanced AI capabilities with CBA's deep financial expertise we can create more personalised experiences for customers while maintaining the highest standards for safety and security. Our teams look forward to working closely with CBA's engineers and data scientists to explore innovative applications of our technology, particularly in critical areas like fraud prevention and customer service enhancement."
"The partnership will support CBA's focus on important AI use cases and help to strengthen CBA's in-house technology capabilities," the joint statement said.
Anthropic Raises Series E at a USD 61.5 Billion Valuation
Earlier in March, Anthropic raised USD 3.5 billion at a USD 61.5 billion post-money valuation. The round was led by Lightspeed Venture Partners, with participation from Bessemer Venture Partners, Cisco Investments, D1 Capital Partners, Fidelity Management and Research Company, General Catalyst, Jane Street, Menlo Ventures and Salesforce Ventures, among other new and existing investors.
With this investment, Anthropic said it will advance its development of AI systems, expand its compute capacity, deepen its research in mechanistic interpretability and alignment, and accelerate its international expansion.
This announcement follows the launch of Claude 3.7 Sonnet and Claude Code, previously reported by TelecomTalk. Anthropic said: "Businesses across industries—from fast-growing startups like Cursor and Codeium to global corporations like Zoom, Snowflake and Pfizer—are turning to Claude to transform their operations."
"Replit integrated Claude into 'Agent' to turn natural language into code, driving 10X revenue growth; Thomson Reuters' tax platform CoCounsel uses Claude to assist tax professionals; Novo Nordisk has used Claude to reduce clinical study report writing from 12 weeks to 10 minutes; and Claude now helps to power Alexa+, bringing advanced AI capabilities to millions of households and Prime members," the AI startup highlighted.
Also Read: Amazon Unveils More Conversational AI Assistant Alexa+, Free for Prime Subscribers
Claude and Alexa+
On February 26, Anthropic announced that its Claude models are helping power Alexa+, Amazon's next-generation AI assistant. This collaboration is part of Amazon's ongoing partnership with Anthropic to "deliver advanced AI technology to businesses and consumers worldwide."
Amazon and Anthropic teams worked closely over the past year to integrate Claude's capabilities into Alexa+ through Amazon Bedrock, Amazon's platform designed to simplify the development of generative AI applications.