Foxconn Unveils Its First LLM FoxBrain to Optimise Manufacturing

Taiwan’s Foxconn has launched its first Large Language Model (LLM), FoxBrain, to enhance manufacturing and supply chain operations. Based on Meta’s Llama 3.1, the AI model features 70 billion parameters and a 128k-token context window. It was trained in four weeks using 120 Nvidia H100 GPUs, scaled with Nvidia Quantum-2 InfiniBand networking, and employs adaptive reasoning reflection to improve efficiency.
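To put the reported model size in perspective, the raw weight storage of a 70-billion-parameter model can be estimated with simple arithmetic. The sketch below is illustrative only and is not based on any published FoxBrain engineering detail; real training memory also includes optimizer state, gradients, and activations, which this ignores.

```python
# Back-of-the-envelope sizing for a 70B-parameter model (FoxBrain's reported size).
# Illustrative assumptions only: precision choices and the 80 GB H100 memory
# figure are generic, not FoxBrain-specific disclosures.

PARAMS = 70_000_000_000          # 70 billion parameters (reported)
H100_MEMORY_GB = 80              # one Nvidia H100 carries 80 GB of HBM

def weights_gb(num_params: int, bytes_per_param: int) -> float:
    """Raw weight storage in decimal gigabytes."""
    return num_params * bytes_per_param / 1e9

fp16_gb = weights_gb(PARAMS, 2)  # 16-bit weights
fp32_gb = weights_gb(PARAMS, 4)  # 32-bit weights

print(f"fp16 weights: {fp16_gb:.0f} GB")  # 140 GB
print(f"fp32 weights: {fp32_gb:.0f} GB")  # 280 GB

# Minimum H100s just to hold the fp16 weights (ceiling division),
# ignoring everything else a training run actually needs:
min_gpus = -(-int(fp16_gb) // H100_MEMORY_GB)
print(f"min H100s for fp16 weights alone: {min_gpus}")  # 2
```

Even this crude estimate shows why inference alone needs multiple GPUs, and why a four-week training run at this scale leans on high-bandwidth interconnects such as the Quantum-2 InfiniBand networking mentioned above.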

Foxconn Unveils FoxBrain

“The institute [Hon Hai Research Institute], backed by Hon Hai Technology Group (Foxconn), the world’s largest electronics manufacturer and technological solutions provider, said the LLM – code-named FoxBrain – will be open-sourced and shared publicly in the future. It was originally designed for applications used in the Group’s internal systems, covering functions such as data analysis, decision support, document collaboration, mathematics, reasoning and problem-solving, and code generation,” Foxconn said on March 10, 2025.

“In recent months, the deepening of reasoning capabilities and the efficient use of GPUs have gradually become the mainstream development in the field of AI. Our FoxBrain model adopted a very efficient training strategy, focusing on optimising the training process rather than blindly accumulating computing power,” said Yung-Hui Li, Director of the Artificial Intelligence Research Center at Hon Hai Research Institute. “Through carefully designed training methods and resource optimisation, we have successfully built a local AI model with powerful reasoning capabilities.”
