Saudi Arabia's stc Group has partnered with AI solutions company SambaNova to introduce an inferencing-as-a-service cloud platform that provides Saudi enterprises with sovereign AI capabilities. Through its AI arm, stc.AI, stc Group launched a Large Language Model (LLM) sovereign cloud platform, which will run what the company calls the world's largest open-source frontier model. "Powered by the fastest inference speeds for Llama 405B, one of the most powerful AI large language models in the world, the stc Group sovereign cloud platform will drive innovation across sectors," the companies announced on Monday, February 17.
Also Read: Airtel Working on India’s First AI-Enabled Sovereign Cloud Solution
Key Features of the stc Sovereign Cloud Platform
Key features of the platform include stc Enterprise GPT, a state-of-the-art generative AI solution. The generative component will enable AI to create new content, leveraging the fastest inference speeds available for Llama 405B. "The platform will ensure seamless integration and scalability for enterprises," the companies said.
According to stc Group, the open-source model will allow users within Saudi Arabia to use, modify, and improve the software according to their specific needs, contributing to stc Group's own Enterprise GPT.
Sovereign AI Ecosystem
Saud Alsheraihi, Vice President of Digital Solutions at stc Group, said: "By offering a secure and scalable inferencing-as-a-Service platform, we are enabling organizations to unlock the full potential of their data while maintaining complete control."
"SambaNova is pleased to partner with stc to introduce KSA’s premier sovereign inferencing-as-a-service cloud, running the world’s largest open-source frontier models at one-tenth the power compared to other solutions," said Rodrigo Liang, CEO of SambaNova Systems.
"The collaboration will empower sovereign AI entities to fine-tune models using private data while leveraging SambaNova's advanced technology," the joint statement said.
Also Read: Stc Group Partners with AWS to Drive AI and Cloud Transformation in Saudi Arabia
Inference Speed
Inference speed refers to the time it takes for an AI model to process an input and generate an output after it has been trained. It is a crucial factor for determining how quickly the model can generate new content based on a given prompt.
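As an illustration of how inference speed is typically measured, the minimal Python sketch below times a single model call and derives a tokens-per-second figure. The `generate` function here is a hypothetical stand-in that simulates latency with a sleep; in practice it would be a call to a deployed model endpoint.

```python
import time

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; a deployed model
    # endpoint would process the prompt here.
    time.sleep(0.05)  # simulate processing latency
    return "generated text for: " + prompt

start = time.perf_counter()
output = generate("Summarize the quarterly report.")
latency = time.perf_counter() - start

# Tokens per second is a common way to express inference speed;
# whitespace splitting is a rough proxy for real tokenization.
tokens = len(output.split())
print(f"latency: {latency:.3f}s, ~{tokens / latency:.1f} tokens/s")
```

Lower latency and higher tokens-per-second translate directly into faster content generation for end users, which is why vendors compete on this metric.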