Advanced Micro Devices (AMD) reportedly outlined its vision for an open-source, hardware-agnostic software ecosystem to accelerate artificial intelligence (AI) development during CEO Lisa Su's address at the Indian Institute of Science (IISc), Bengaluru. AMD, a key competitor to Nvidia in generative AI chips, is focusing on holistic system design to balance performance, power, and cooling constraints in advanced data centers, according to a Moneycontrol report.
AMD's Commitment to Open-Source AI Ecosystem
"I think the desire for getting to hardware-agnostic programming environment is very high, and some of it comes from us as hardware vendors, then a lot of it comes from the users on the application standpoint. I do think that they're going to get more adoption because they're being driven by the leading AI companies out there," said Lisa Su, chief executive officer and chairman of AMD, during a fireside chat at the IISc, Bengaluru, according to the report.
Su, a distant cousin of Nvidia CEO Jensen Huang, reportedly said that the world needs an open-source software environment. "It shouldn't matter whether it's AMD or Nvidia or ABC as the hardware layer you want to build on top of that, and with the software under the abstraction."
Hardware-Agnostic Programming for AI
Su reportedly detailed AMD's substantial investments in tools, compilers, and abstraction layers to support this vision. "We're investing significantly in all of the tools and compilers and the abstraction layers that will allow you to build this open-source ecosystem, and I would say that there's tremendous support in the industry for this type of more open ecosystem," she added, according to the report.
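To illustrate what such an abstraction layer means in practice, here is a minimal sketch (not from AMD's materials) of hardware-agnostic PyTorch code. It targets an abstract device rather than a specific vendor's API: PyTorch's ROCm build for AMD GPUs exposes the same "cuda" device interface as the CUDA build for Nvidia GPUs, so the identical script runs on either accelerator, or falls back to the CPU.

```python
# Minimal sketch of hardware-agnostic PyTorch code. The same script runs
# unchanged on an NVIDIA GPU (CUDA build), an AMD GPU (ROCm build, exposed
# through the same "cuda" device interface), or a plain CPU.
import torch
import torch.nn as nn

def pick_device() -> torch.device:
    # ROCm builds of PyTorch also report availability via torch.cuda,
    # so one branch covers both GPU vendors; everything else uses the CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
if getattr(torch.version, "hip", None):
    backend = "ROCm"
elif torch.version.cuda:
    backend = "CUDA"
else:
    backend = "CPU"
print(f"Running on {device} ({backend} build)")

# The model and data reference only the abstract device, never a vendor.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
x = torch.randn(32, 128, device=device)
logits = model(x)
print(logits.shape)  # torch.Size([32, 10])
```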
Su emphasised the importance of holistic system design in AI, stressing the need to balance performance, power, and cooling constraints, the report said. "When you talk about the environment that we have to build, it's one where we consider all of the constraints, which include performance, power, and the overall cooling of these machines."
PyTorch's Rapid Growth
Su also noted the rapid growth of PyTorch, an open-source machine learning library, which now supports over a million models—triple the number from last year. PyTorch, initially developed by Meta AI and now managed under the Linux Foundation, is widely used in applications such as computer vision and natural language processing.
"The fact that PyTorch now runs more than a million models. You had asked me that question 12 months ago, it was probably a third of that number. So it's ramping up very quickly," Su reportedly said.
Foundation Models Supporting Diverse Hardware
According to the report, she also highlighted that major foundation models, including the newly released Llama models, have been operational across AMD, Nvidia, and other hardware platforms from day one, reflecting the industry's push for easier adoption of diverse hardware solutions.
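As a hedged sketch of that "day one" portability, the snippet below loads a Llama-family model with the Hugging Face Transformers library; the loading code is the same whether the GPU underneath is AMD (ROCm) or Nvidia (CUDA). The model identifier and generation settings are illustrative assumptions rather than details from the report, and the model itself is gated, requiring access approval on Hugging Face.

```python
# Illustrative sketch: loading a Llama-family model with Hugging Face
# Transformers. device_map="auto" (which needs the `accelerate` package)
# places the weights on whatever accelerator PyTorch reports as available,
# so the code does not change between AMD and Nvidia hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative, gated model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick a dtype suited to the detected hardware
    device_map="auto",    # requires the `accelerate` package
)

inputs = tokenizer("Open ecosystems matter because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```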