Tech Mahindra, an Indian IT services and consulting company, has signed a multi-year strategic agreement with Amazon Web Services (AWS) to develop an Autonomous Networks Operations Platform (ANOP). The platform, built on artificial intelligence (AI), machine learning (ML), and generative AI (GenAI) services powered by AWS, is designed for Communication Service Providers (CSPs) and enterprise customers.
Tech Mahindra and AWS Partnership
The ANOP platform will enable customers to transition their network operations from on-premises infrastructure to a real-time, proactive, and preventive model running on a hybrid cloud, Tech Mahindra announced on Wednesday.
The collaboration combines Tech Mahindra's GenAI capabilities and telecom networks expertise with Amazon SageMaker, a service for building, training, and deploying machine learning models for any industry use case with fully managed infrastructure, tools, and workflows.
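Tech Mahindra has not published implementation details, but a minimal sketch of the kind of SageMaker workflow the announcement points to, training a model and deploying it behind a managed endpoint with the SageMaker Python SDK, might look as follows. The training script, S3 path, and instance types are illustrative placeholders, not part of ANOP.

```python
# Minimal illustrative sketch (not Tech Mahindra's code): train and deploy a
# model with the Amazon SageMaker Python SDK. All names and paths are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes execution inside a SageMaker environment

estimator = SKLearn(
    entry_point="train_anomaly_model.py",   # hypothetical training script
    framework_version="1.2-1",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    role=role,
    sagemaker_session=session,
)

# Train on labelled network telemetry stored in S3 (placeholder path).
estimator.fit({"train": "s3://example-bucket/network-telemetry/train/"})

# Deploy the trained model behind a managed real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```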
According to Tech Mahindra, the ANOP platform will enable CSPs to enhance Network Operations Center (NOC) productivity for teams managing physical and cloud infrastructure by more than 50 percent, reduce field visits by over 15 percent and shorten the Mean Time to Repair (MTTR) for network and service incidents by more than 30 percent.
Additionally, the collaboration includes Amazon Bedrock, a fully managed service providing foundation models (FMs) from leading AI companies via a single API. It offers the essential capabilities for building GenAI applications with security, privacy, and responsible AI practices.
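As a rough illustration of the "single API" point, the sketch below calls a Bedrock-hosted foundation model through the Converse API using boto3. The model ID, region, and prompt are placeholders and do not reflect ANOP's actual configuration.

```python
# Minimal illustrative sketch (not Tech Mahindra's implementation): invoke a
# foundation model on Amazon Bedrock through the uniform Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # any Bedrock-hosted FM
    messages=[{
        "role": "user",
        "content": [{"text": "Summarise the probable root cause of these network alarms: ..."}],
    }],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```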
"Our collaboration with AWS empowers telcos to simplify operations, modernise networks, and unlock revenue through advanced artificial intelligence and machine learning," said Manish Mangal, Chief Technology Officer, Telecom and Global Business Head, Network Services at Tech Mahindra.
By integrating AWS's GenAI, the Autonomous Networks Operations Platform delivers real-time insights and intelligent workflows, and supports Open RAN (O-RAN) adoption for efficient, proactive network management, Mangal said.
Tech Mahindra is also testing and validating O-RAN functions on Amazon's EKS Anywhere (EKS-A) platform. The joint testing and validation of the platform for Distributed Unit (DU) and Central Unit (CU) network functions will accelerate the cloudification of RAN at the edge.
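The companies have not described the validation setup, but since EKS Anywhere exposes a standard Kubernetes API, a basic health check of containerised DU/CU network-function pods could be sketched as below. The namespace name "oran" is a hypothetical example.

```python
# Minimal sketch (assumption, not the actual validation suite): check the health
# of O-RAN network-function pods on an EKS Anywhere cluster using the official
# Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig generated when the EKS-A cluster was created
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="oran").items:  # hypothetical namespace
    ready = all(c.ready for c in (pod.status.container_statuses or []))
    print(f"{pod.metadata.name}: phase={pod.status.phase}, ready={ready}")
```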
"Through this effort, network operators can get generative AI-enabled actionable and just-in-time recommendations such as for NOC operations, field dispatch optimization, as well as automated self-healing for preventive actions," said Robin Harwani, Head of Telco Industry Solutions at AWS.
This will make it easier for operators to more proactively manage and optimise network performance and reduce their operational expenditure, Harwani added.
Tech Mahindra revealed it is currently implementing ANOP for a leading communications provider in Europe to enhance network operations through process automation and optimisation.
Tech Mahindra Launches agentX
On November 19, Tech Mahindra introduced TechM agentX—a suite of GenAI-powered solutions to drive intelligent automation and enhance enterprise efficiency globally. The solutions will address inefficiencies in traditional operations, enabling enterprises to achieve enhanced productivity, scalability, and user experience.
Through these solutions, enterprises can automate complex business, IT, and data tasks, improving productivity by up to 70 percent, Tech Mahindra said. The first solution in the suite, agentAssistX, is a GenAI-powered, agentless business, IT, and end-user support solution aimed at unifying and optimising support silos.
This will make IT support faster, simpler, more scalable, and more efficient. In addition, agentAssistX can integrate with IT service management (ITSM) software, enterprise security, network telemetry data, and cloud management tools to automate ticket resolution and provisioning.
"With the launch of TechM agentX, our vision of AI-driven enterprise automation takes a significant leap forward. The implementation of these solutions will be a game-changer for enterprises. By automating intricate processes and significantly enhancing productivity, agentAssistX will provide a cohesive method for ensuring seamless user experiences and scalability across various systems," said Kunal Purohit, President – Next Gen Services, Tech Mahindra.
Tech Mahindra Opens AI CoE with Nvidia
In another development, on October 24, Tech Mahindra announced a Center of Excellence (CoE) powered by Nvidia platforms to advance sovereign large language model (LLM) frameworks, agentic AI, and physical AI.
Based on the Tech Mahindra Optimised Framework, the CoE leverages the Nvidia AI Enterprise software platform, including NeMo, NIM microservices, and RAPIDS, to offer customised, enterprise-grade AI applications that help clients adopt agentic AI in their businesses. Agentic AI significantly improves productivity by enabling AI applications to learn, reason, and take action, Tech Mahindra said.
The CoE also uses the Nvidia Omniverse platform to develop connected industrial AI digital twins and physical AI applications across various sectors, including manufacturing, automotive, telecommunications, healthcare, banking, financial services and insurance.
Leveraging the capabilities of the CoE, Tech Mahindra said it has also developed Project Indus 2.0, an advanced AI model built with Nvidia NeMo and trained on Hindi and dozens of its dialects, such as Bhojpuri, Dogri, and Maithili. Project Indus 2.0 caters to diverse sectors in India, including retail, banking, healthcare, and citizen services.
In the future, Indus 2.0 aims to include agentic workflows and support multiple dialects to provide a more nuanced and effective AI solution tailored to India's linguistic and cultural landscape.
Small Language Model for Hindi
Nvidia has released a small language model for Hindi, dubbed Nemotron-4-Mini-Hindi-4B, which is available as a NIM microservice. The Nemotron Hindi model features 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion-parameter multilingual language model developed by Nvidia. According to Nvidia, Tech Mahindra is the first to use the Nemotron Hindi NIM microservice to develop the Indus 2.0 AI model. The company also leverages Nvidia NeMo to develop its sovereign LLM platform, TeNo.
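NIM microservices expose an OpenAI-compatible API, so a deployed Nemotron Hindi NIM could be queried roughly as in the sketch below. The base URL and model identifier are assumptions standing in for an actual deployment, not details confirmed by Nvidia or Tech Mahindra.

```python
# Minimal illustrative sketch: query a locally hosted Nemotron Hindi NIM
# microservice through its OpenAI-compatible endpoint. URL and model id are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # typical port for a local NIM container
    api_key="not-needed-for-local-nim",
)

response = client.chat.completions.create(
    model="nvidia/nemotron-4-mini-hindi-4b-instruct",  # placeholder model id
    messages=[
        # Prompt in Hindi: "What is digital banking in India?"
        {"role": "user", "content": "भारत में डिजिटल बैंकिंग क्या है?"}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```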
Atul Soneja, Chief Operating Officer, Tech Mahindra, said, "At Tech Mahindra, we are redefining the boundaries of AI innovation. Collaborating with Nvidia, we are setting a new benchmark for enterprise-grade AI development by seamlessly integrating GenAI, industrial AI and sovereign large language models into the heart of global enterprises and industries."
Tech Mahindra will also leverage the new Nvidia NIM Agent Blueprint for customer service to help call center clients build custom AI virtual assistants that can aid human agents in rapidly resolving issues.
John Fanelli, Vice President, Enterprise Software at Nvidia, said, "Built with Nvidia technology, Tech Mahindra's Center of Excellence will accelerate the development and adoption of sovereign AI LLMs and applications tailored for India's diverse industries and linguistic landscape."
The CoE is located within Tech Mahindra's Makers Lab in Pune and Hyderabad.