Nvidia announced and showcased many new products and services at CES 2025 in Las Vegas last month. Nvidia founder and CEO Jensen Huang delivered a keynote introducing the company's new products for advancing gaming, autonomous vehicles, robotics and agentic AI. AI is advancing at an "incredible pace," Huang told the audience at the event, saying, "It started with perception AI — understanding images, words and sounds. Then generative AI — creating text, images and sound. Now, we're entering the era of physical AI, AI that can perceive, reason, plan and act."
Huang explained that Nvidia's GPUs and platforms are the core of this transformation, enabling breakthroughs across industries, including gaming, robotics and autonomous vehicles (AVs).
In 1999, Nvidia invented the programmable GPU. Since then, modern AI has fundamentally changed how computing works, he said. "Every single layer of the technology stack has been transformed, an incredible transformation, in just 12 years."
Huang's keynote showcased how Nvidia's latest innovations are enabling this new era of AI. These include:
1. Nvidia Cosmos World Foundation Model Platform
Nvidia announced Cosmos, a platform with generative world foundation models (WFMs) designed to accelerate AI development for autonomous vehicles (AVs) and robotics.
Cosmos enables physics-based AI training using synthetic data, reducing the need for costly real-world testing. It features advanced tokenisers, AI-accelerated video processing, and reinforcement learning tools. Cosmos is open-source, includes AI safety measures, and is available on Hugging Face and Nvidia NGC.
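Because the models and tokenisers are published openly, developers can pull them straight into their own pipelines. As a rough illustration, a first download step might look like the minimal sketch below; the repository name is an assumption and should be checked against Nvidia's Hugging Face organisation.

```python
# Minimal sketch: fetching an open Cosmos checkpoint from Hugging Face.
# The repository ID below is an assumption for illustration -- verify the
# actual model names on Nvidia's Hugging Face organisation before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/Cosmos-1.0-Tokenizer-CV8x8x8",  # assumed repo name
    local_dir="./cosmos-tokenizer",
)
print(f"Checkpoint files downloaded to {local_dir}")
```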
Nvidia said major robotics and automotive companies, including 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Virtual Incision, Waabi and XPENG, along with ridesharing giant Uber, are among the first to adopt Cosmos.
"The ChatGPT moment for robotics is coming. Like large language models, world foundation models are fundamental to advancing robot and AV development, yet not all developers have the expertise and resources to train their own," said Jensen Huang, founder and CEO of Nvidia. "We created Cosmos to democratise physical AI and put general robotics in reach of every developer."
"The ChatGPT moment for general robotics is just around the corner," he explained.
According to Nvidia, 1X, an AI and humanoid robot company, launched the 1X World Model Challenge dataset using Cosmos Tokenizer. XPENG will use Cosmos to accelerate the development of its humanoid robot, while Hillbot and Skild AI are using Cosmos to fast-track the development of their general-purpose robots.
"Data scarcity and variability are key challenges to successful learning in robot environments," said Pras Velagapudi, chief technology officer at Agility. "Cosmos' text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture."
"Generative AI will power the future of mobility, requiring both rich data and very powerful compute," said Dara Khosrowshahi, CEO of Uber. "By working with Nvidia, we are confident that we can help supercharge the timeline for safe and scalable autonomous driving solutions for the industry."
2. Nvidia Announces Nemotron Model Families to Advance Agentic AI
Nvidia also announced new Llama Nemotron large language models and Cosmos Nemotron vision language models that developers can use for enterprise AI use cases in healthcare, financial services, manufacturing and more.
Announcing the Nemotron model families, Nvidia said, "Artificial intelligence is entering a new era — agentic AI — where teams of specialised agents can help people solve complex problems and automate repetitive tasks."
"With custom AI agents, enterprises across industries can manufacture intelligence and achieve unprecedented productivity. These advanced AI agents require a system of multiple generative AI models optimised for agentic AI functions and capabilities," Nvidia said.
To provide a foundation for enterprise agentic AI, Nvidia announced the Llama Nemotron family of open large language models (LLMs) on January 6, 2025. Built with Llama, the models, according to the company, can help developers create and deploy AI agents across a range of applications — including customer support, fraud detection, and product supply chain and inventory management optimisation.
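Nvidia typically distributes models like these as NIM microservices that expose an OpenAI-compatible API, so an initial experiment could look like the hedged sketch below. The endpoint URL and model identifier are placeholders, not names confirmed in the announcement; substitute the values from your own NIM deployment.

```python
# Hedged sketch: querying a Llama Nemotron model through an
# OpenAI-compatible NIM endpoint. The base_url and model name are
# placeholders for illustration -- use the values from your own
# NIM deployment or API catalogue entry.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM endpoint
    api_key="not-needed-for-local-nim",
)

response = client.chat.completions.create(
    model="llama-nemotron",                 # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a fraud-detection assistant."},
        {"role": "user", "content": "Flag anything unusual in these transactions: ..."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```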
"Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimisation across a system of LLMs to deliver efficient, accurate AI agents," said Ahmad Al-Dahle, vice president and head of GenAI at Meta. "Through our collaboration with Nvidia and our shared commitment to open models, the Nvidia Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents."
AI agent platform providers including SAP and ServiceNow are expected to be among the first to use the new Llama Nemotron models.
"AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today's generative AI scenarios," said Philipp Herzig, chief AI officer at SAP. "Through SAP's Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. Nvidia's new open Llama Nemotron model family will foster the development of multiple specialised AI agents to transform business processes."
"AI agents make it possible for organisations to achieve more with less effort, setting new standards for business transformation," said Jeremy Barnes, vice president of platform AI at ServiceNow. "The improved performance and accuracy of Nvidia's open Llama Nemotron models can help build advanced AI agent services that solve complex problems across functions, in any industry."
Nvidia Cosmos Nemotron, Llama Nemotron and NeMo Retriever supercharge the new Nvidia Blueprint for video search and summarisation, Nvidia said.
3. Nvidia Blackwell-based GeForce RTX 50 Series GPUs
Nvidia introduced the GeForce RTX 50 Series GPUs, powered by the Blackwell architecture, aimed at gamers, creators, and developers. These GPUs feature advanced AI-driven rendering, including neural shaders, digital human technologies, and enhanced geometry and lighting.
"Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives," said Jensen Huang, founder and CEO of Nvidia. "Fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago."
"GeForce enabled AI to reach the masses, and now AI is coming home to GeForce," Huang said.
"Here it is — our brand-new GeForce RTX 50 series, Blackwell architecture," Huang said, holding the blacked-out GPU aloft and noting how it's able to harness advanced AI to enable breakthrough graphics. "The GPU is just a beast."
The GeForce RTX 5090 GPU — the fastest GeForce RTX GPU to date — features 92 billion transistors, providing 3,352 trillion AI operations per second (TOPS) of computing power, Nvidia said.
The RTX 50 Series also brings AI-powered tools for creators, supporting AI image generation and live-streaming tools including Virtual Key Light and Studio Voice.
The GeForce RTX 5090 and 5080 GPUs are available now, with RTX 5070 Ti and 5070 coming in February. Laptop models will be available starting in March and April.
4. Nvidia Launches AI Foundation Models for RTX AI PCs
Nvidia unveiled foundation AI models running locally on RTX AI PCs, powered by GeForce RTX 50 Series GPUs. These models, offered as NIM microservices, enhance digital humans, content creation, productivity, and AI development.
"These AI models run in every single cloud because Nvidia GPUs are now available in every single cloud," Huang said. "It's available in every single OEM, so you could literally take these models, integrate them into your software packages, create AI agents and deploy them wherever the customers want to run the software."
"AI is advancing at light speed, from perception AI to generative AI and now agentic AI," said Jensen Huang. "NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI."
"GeForce RTX 50 Series GPUs with FP4 compute will unlock a massive range of models that can run on PC, which were previously limited to large data centers," said Robin Rombach, CEO of Black Forest Labs. "Making FLUX an Nvidia NIM microservice increases the rate at which AI can be deployed and experienced by more users, while delivering incredible performance."
"AI is driving Windows 11 PC innovation at a rapid rate, and Windows Subsystem for Linux (WSL) offers a great cross-platform environment for AI development on Windows 11 alongside Windows Copilot Runtime," said Pavan Davuluri, corporate vice president of Windows at Microsoft. "Nvidia NIM microservices, optimised for Windows PCs, give developers and enthusiasts ready-to-integrate AI models for their Windows apps, further accelerating deployment of AI capabilities to Windows users."
NIM-ready RTX AI PCs will be available from Acer, ASUS, Dell, HP, Lenovo, MSI, Razer, Samsung, and others, Nvidia said.
5. Nvidia Project DIGITS - World's Smallest AI Supercomputer
Nvidia has announced Project DIGITS, a personal AI supercomputer powered by the GB10 Grace Blackwell Superchip, delivering 1 petaflop of AI performance for prototyping, fine-tuning and running large AI models. Designed for researchers, data scientists, and students, it enables local AI model development and deployment to cloud or data centers.
"Every software engineer, every engineer, every creative artist — everybody who uses computers today as a tool — will need an AI supercomputer," Huang said.
"This is Nvidia's latest AI supercomputer," Huang said, showcasing the device. "It runs the entire Nvidia AI stack — all of Nvidia software runs on this. DGX Cloud runs on this."
"AI will be mainstream in every application for every industry. With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers," said Huang. "Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI."
Featuring 128GB unified memory, up to 4TB storage, and support for 200-billion-parameter LLMs, Project DIGITS offers high efficiency and enterprise-grade AI software. Project DIGITS will be available in May from Nvidia and partners, starting at USD 3,000.
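A quick back-of-the-envelope check shows why the 128GB of unified memory pairs with the 200-billion-parameter figure: at 4-bit (FP4) precision each parameter occupies half a byte, so the weights alone take roughly 100GB and fit in memory, whereas FP16 weights would need around 400GB. The sketch below simply formalises that arithmetic; the precision assumptions are ours, and activation and KV-cache memory are ignored.

```python
# Back-of-the-envelope check on Project DIGITS' headline specs:
# can a 200-billion-parameter model fit in 128 GB of unified memory?
# Weights only; activations and KV cache are ignored.
PARAMS = 200e9            # 200 billion parameters
UNIFIED_MEMORY_GB = 128   # Project DIGITS unified memory

for precision, bytes_per_param in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    verdict = "fits" if weights_gb <= UNIFIED_MEMORY_GB else "does not fit"
    print(f"{precision}: ~{weights_gb:.0f} GB of weights -> {verdict} in {UNIFIED_MEMORY_GB} GB")
```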
6. Nvidia Announces DRIVE Hyperion AV Platform
Nvidia announced on January 6, 2025, that its DRIVE Hyperion autonomous vehicle (AV) platform has passed safety assessments by TÜV SÜD and TÜV Rheinland, two of the industry's leading authorities on automotive-grade safety and cybersecurity.
"The autonomous vehicle revolution is here," Huang said. "Building autonomous vehicles, like all robots, requires three computers: Nvidia DGX to train AI models, Omniverse to test drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car."
"A billion vehicles driving trillions of miles each year move the world. With autonomous vehicles — one of the largest robotics markets — now here, the Nvidia Blackwell-powered platform will shift this revolution into high gear," said Huang. "The next wave of autonomous machines will rely on physical AI world foundation models to understand and interact with the real world, and Nvidia DRIVE is purpose-built for this new era, delivering unmatched functional safety and AI."
According to Nvidia, DRIVE Hyperion, the industry's first end-to-end autonomous driving platform, includes DRIVE AGX Thor, a Blackwell-powered SoC designed for passenger and commercial vehicles. With ISO 21434 cybersecurity and ISO 26262 ASIL D functional safety compliance, it meets stringent safety and cybersecurity standards.
According to Nvidia, DRIVE Hyperion is already adopted by automotive companies such as Mercedes-Benz, JLR, and Volvo Cars.
Nvidia also said it is the first platform company accredited to inspect AV systems for safety.
Huang also highlighted the critical role of synthetic data in advancing autonomous vehicles. Because real-world driving data is limited, synthetic data is essential to the data factories used to train AV models, he explained.
7. Nvidia Launches DRIVE AI Systems Inspection Lab
Additionally, Nvidia launched the DRIVE AI Systems Inspection Lab. Nvidia says the new lab will help automotive ecosystem partners navigate evolving industry standards for autonomous vehicle safety. The lab, which launched on January 6, will focus on inspecting and verifying that automotive partner software and systems on the Nvidia DRIVE AGX platform meet the automotive industry's safety and cybersecurity standards, including AI functional safety.
The lab has been accredited by the ANSI National Accreditation Board (ANAB) according to the ISO/IEC 17020 assessment standards.
"The launch of this new lab will help partners in the global automotive ecosystem create safe, reliable autonomous driving technology," said Ali Kani, vice president of automotive at Nvidia. "With accreditation by ANAB, the lab will carry out an inspection plan that combines functional safety, cybersecurity and AI — bolstering adherence to the industry's safety standards."
The new lab builds on Nvidia's ongoing safety compliance work with Mercedes-Benz and JLR. Initial participants in the lab include Continental and Sony SSS-America.
8. Toyota, Aurora and Continental Adopt Nvidia DRIVE Platform
Nvidia announced that Toyota, Aurora, and Continental have joined the list of global mobility companies building their vehicles on Nvidia's AI-powered DRIVE platform. Toyota will integrate DRIVE AGX Orin running DriveOS for driver-assistance features, while Aurora and Continental plan to mass-produce driverless trucks powered by Nvidia DRIVE in 2027.
"The autonomous vehicle revolution has arrived, and automotive will be one of the largest AI and robotics industries," said Jensen Huang. "Nvidia is bringing two decades of automotive computing, safety expertise and its CUDA AV platform to transform the multitrillion dollar auto industry."
Other mobility companies adopting Nvidia DRIVE AGX for their advanced driver-assistance systems and autonomous vehicle roadmaps include BYD, JLR, Li Auto, Lucid, Mercedes-Benz, NIO, Nuro, Rivian, Volvo Cars, Waabi, Wayve, Xiaomi, ZEEKR, Zoox and many more, Nvidia announced.
Nvidia added that its automotive vertical business is expected to grow to approximately USD 5 billion in fiscal year 2026, with the majority of automakers, truckmakers, robotaxi and autonomous delivery vehicle companies, tier-one suppliers and mobility startups developing on the Nvidia DRIVE AGX platform and technologies.
"Just as computer graphics was revolutionised at such an incredible pace, you're going to see the pace of AV development increasing tremendously over the next several years," Huang said. These vehicles will offer functionally safe, advanced driving assistance capabilities.
9. Nvidia and Partners Launch Agentic AI Blueprints
Nvidia and its partners launched Agentic AI blueprints to automate work for enterprises. With the blueprints, Nvidia said developers can now build and deploy custom AI agents.
These AI agents act like "knowledge robots" that can reason, plan and take action to quickly analyse large quantities of data and to summarise and distill real-time insights from video, PDFs and other data sources, Nvidia explained, noting that these agents can streamline workflows across industries such as automotive, manufacturing and technology.
CrewAI, Daily, LangChain, LlamaIndex and Weights & Biases have worked with Nvidia to build blueprints that integrate with the Nvidia AI Enterprise software platform. These five blueprints comprise a new category of partner blueprints for agentic AI, Nvidia said.
In addition to the partner blueprints, Nvidia introduced its own new AI Blueprint for PDF-to-podcast conversion, as well as another to build AI agents for video search and summarisation. These are joined by four additional Nvidia Omniverse Blueprints for developers to build simulation-ready digital twins for physical AI.
These Blueprints can be deployed instantly using Nvidia Launchables and integrated into enterprise cloud or on-premises infrastructure, including AWS, Google Cloud, Microsoft Azure, Oracle Cloud Infrastructure, Dell, HPE, Lenovo, and Supermicro.
To accelerate adoption, Accenture AI Refinery is offering 12 new pre-built industry-specific AI solutions, helping businesses in consumer goods, life sciences, and industrial sectors rapidly implement AI-driven automation.
Accenture plans to launch over 100 AI Refinery for Industry agent solutions by the end of the year, Nvidia announced.
Developers can use Nvidia NIM microservices to build AI agents for tasks like customer support, fraud detection, and supply chain optimisation.
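To make the idea of an agent that "takes action" concrete, here is a hedged sketch of a single-tool customer-support agent built against an OpenAI-compatible NIM endpoint. The endpoint, model name and order-lookup tool are all illustrative assumptions, and the pattern presumes the served model supports OpenAI-style tool calling.

```python
# Hedged sketch of a single-tool support agent: the model decides whether to
# call a hypothetical order-lookup function, then answers with the result.
# Endpoint, model name and the tool itself are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

def lookup_order(order_id: str) -> str:
    # Hypothetical helper standing in for a real order database.
    return json.dumps({"order_id": order_id, "status": "shipped"})

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is my order 1234?"}]
reply = client.chat.completions.create(
    model="llama-nemotron", messages=messages, tools=tools  # assumed model name
)
call = reply.choices[0].message.tool_calls[0]
result = lookup_order(**json.loads(call.function.arguments))

messages.append(reply.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
final = client.chat.completions.create(model="llama-nemotron", messages=messages)
print(final.choices[0].message.content)
```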
10. Nvidia Isaac GR00T Blueprint for Synthetic Motion Generation
Nvidia announced the launch of a collection of robot foundation models, data pipelines and simulation frameworks to accelerate humanoid robot development efforts, highlighting, "Over the next two decades, the market for humanoid robots is expected to reach USD 38 billion."
The Nvidia Isaac GR00T Blueprint for synthetic motion generation helps developers generate exponentially large synthetic motion data to train their humanoids using imitation learning.
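The article does not go into the blueprint's internals, but the underlying idea of multiplying a handful of demonstrations into a large synthetic training set can be illustrated in a framework-agnostic way: take a recorded joint-angle trajectory and generate many perturbed variants for imitation learning. The NumPy sketch below shows only that general idea and is not the Isaac GR00T API.

```python
# Framework-agnostic illustration of synthetic motion augmentation for
# imitation learning: perturb one demonstrated joint trajectory into many
# variants. This is NOT the Isaac GR00T API -- only the underlying idea.
import numpy as np

rng = np.random.default_rng(0)

# One demonstration: 100 timesteps x 7 joint angles (radians), synthetic here.
demo = np.cumsum(rng.normal(scale=0.01, size=(100, 7)), axis=0)

def augment(trajectory: np.ndarray, n_variants: int = 1000,
            noise_std: float = 0.02) -> np.ndarray:
    """Return n_variants noise-perturbed copies of a (T, J) trajectory."""
    noise = rng.normal(scale=noise_std, size=(n_variants, *trajectory.shape))
    return trajectory[None, :, :] + noise

synthetic = augment(demo)
print(synthetic.shape)  # (1000, 100, 7): one demo becomes 1,000 training motions
```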
Huang emphasised the importance of training robots efficiently, using Nvidia's Omniverse to generate millions of synthetic motions for humanoid training.
"The Mega blueprint powers large-scale simulations of robot fleets, enabling companies like Accenture and KION to revolutionise warehouse automation," Nvidia said.
"Collectively, Nvidia Isaac GR00T, Omniverse and Cosmos are helping physical AI and humanoid innovation take a giant leap forward," Nvidia said, noting, "Major robotics companies have started adopting and demonstrated results with Isaac GR00T, including Boston Dynamics and Figure."
11. Nvidia Says Media2 Transforms Content Creation
Nvidia says Media2, its initiative built on technologies including NIM microservices and AI Blueprints, is streamlining AI video pipelines and enhancing audience engagement while transforming the USD 3 trillion media industry.
Built on Nvidia Blackwell architecture and AI Enterprise software, Media2 enables personalised, immersive media experiences with generative AI, video search, and intelligent automation. "Media2 uses AI to drive the creation of smarter, more tailored and more impactful content that can adapt to individual viewer preferences," the company highlighted.
"Amid this rapid creative transformation, companies embracing Nvidia Media2 can stay on the USD 3 trillion media and entertainment industry's cutting edge, reshaping how audiences consume and engage with content," Nvidia said.
Nvidia Holoscan for Media is a software-defined, AI-enabled platform that allows companies in broadcast, streaming and live sports to run live video pipelines on the same infrastructure as AI.
Partners in the Media2 Ecosystem
Companies such as Getty Images, Shutterstock, Bria, Wonder Dynamics (an Autodesk company), Runway, Comcast, Vu, Twelve Labs, Monks of S4 Capital, Qvest, and Verizon are leveraging Nvidia's technologies to transform storytelling, live sports, and interactive entertainment, according to the company.
Verizon is pairing its private 5G network with Nvidia's full-stack AI platform, which can also enable private 5G-powered enterprise AI use cases that drive automation and productivity, Nvidia said.
12. Hyundai Motor Group Partners with Nvidia
Nvidia said Hyundai Motor Group is partnering with the company to develop safe, secure mobility with AI and industrial digital twins. The Group is bringing a range of AI initiatives to its key mobility products, including software-defined vehicles and robots, and optimising its manufacturing lines.
"Hyundai Motor Group is exploring innovative approaches with AI technologies in various fields such as robotics, autonomous driving and smart factory," said Heung-Soo Kim, executive vice president and head of the global strategy office at Hyundai Motor Group. "This partnership is set to accelerate our progress, positioning the Group as a frontrunner in driving AI-empowered mobility innovation."
Hyundai Motor Group will tap into Nvidia's data-center-level computing and infrastructure to efficiently manage the massive data volumes essential for training its advanced AI models and building an autonomous vehicle (AV) software stack, the company said.
With the Nvidia Omniverse platform running on Nvidia OVX systems, Hyundai Motor Group will build a digital thread across its existing software tools to achieve highly accurate product design and prototyping in a digital twin environment. This will help boost engineering efficiencies, reduce costs and accelerate time to market, Nvidia explained.
The Group will also work with Nvidia to create simulated environments for developing autonomous driving systems and validating self-driving applications.
Hyundai Motor Group will develop applications such as digital twins built with Omniverse technologies to optimise its existing and future manufacturing lines in simulation. These digital twins can improve production quality, streamline costs and enhance overall manufacturing efficiencies, the company explained.