Cerence, a voice AI company headquartered in Burlington, Massachusetts, announced on Friday an expanded partnership with Nvidia to enhance its CaLLM (Cerence Automotive Large Language Model) family, which spans the cloud-based CaLLM and the CaLLM Edge embedded small language model. The collaboration aims to accelerate the development and deployment of generative AI for automotive applications across both cloud-based and embedded solutions.
Cerence-Nvidia Partnership
As part of the partnership, CaLLM leverages Nvidia AI Enterprise, a cloud-native software platform, while some aspects of CaLLM Edge are powered by Nvidia DRIVE AGX Orin. These technologies are used to optimise the performance, speed, and security of in-car systems. By working alongside Nvidia's hardware and software engineers, Cerence AI said it has enhanced its ability to meet production timelines and deploy generative AI solutions for automotive applications.
Leveraging Nvidia Technologies
Cerence AI specifically said it has accelerated the development and deployment of CaLLM by leveraging the Nvidia AI Enterprise software platform, including TensorRT-LLM and NeMo, a framework for building, customising, and deploying generative AI applications into production.
Benefits of the Collaboration
Incorporating Nvidia technologies has enabled Cerence AI to deliver faster assistant response times, stronger privacy protections, and robust safeguards such as NeMo Guardrails for safer driver interactions, the official release said.
Overall, this expanded collaboration with Nvidia equips Cerence AI with scalable tools and resources to develop next-generation user experiences in partnership with its automaker customers, Cerence said in a statement on January 3.
"By optimising the performance of our CaLLM family of language models, we are delivering cost savings and improved performance to our automaker customers, who are running quickly to deploy generative AI-powered solutions to their drivers," said Nils Schanz, Executive Vice President of Product and Technology at Cerence AI. "As we advance our next-gen platform, with CaLLM as its foundation, these advanced capabilities will deliver faster, more reliable interaction to drivers, enhancing their safety, enjoyment and productivity on the road."
"Large language models are offering vast, new user experiences, but complexities in size and deployment can make it difficult for developers to get AI-powered solutions into the hands of end users," said Rishi Dhall, Vice President of Automotive at Nvidia. "Through this expanded collaboration, Cerence AI is deploying advanced Nvidia AI and accelerated computing technologies to optimize its LLM development and deployment."