EQTY Lab, in collaboration with Intel and Nvidia, announced on December 18 the launch of the Verifiable Compute AI framework, a hardware-based solution designed to ensure AI workflows are secure, accountable, and compliant with emerging regulations such as the EU AI Act.
Verifiable Compute AI Framework
Verifiable Compute introduces a cryptographic AI notary and certificate system embedded directly in hardware such as Nvidia GPUs and Intel 5th Gen Xeon processors. It creates tamper-proof records of AI training and inference, enforces policy compliance in real time, and issues audit-ready certificates that demonstrate regulatory adherence, according to the statement.
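The statement does not detail the record format, but the core idea of a hardware-rooted notary can be illustrated with a simple hash-and-sign sketch. The Python snippet below is a minimal illustration under assumed names (notarize_run, verify_certificate) and a placeholder software key; in the actual framework, signing and attestation would be anchored in the processor's trusted hardware rather than in application code.

```python
# Illustrative sketch only: the record format and signing scheme here are assumptions,
# not EQTY Lab's implementation. A real deployment would keep the signing key inside
# a hardware root of trust (e.g. a TEE), not in software.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-held-by-hardware-root-of-trust"  # placeholder secret

def notarize_run(model_bytes: bytes, dataset_manifest: dict, policy_id: str) -> dict:
    """Produce a tamper-evident certificate for one training or inference run."""
    payload = {
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "dataset_sha256": hashlib.sha256(
            json.dumps(dataset_manifest, sort_keys=True).encode()
        ).hexdigest(),
        "policy_id": policy_id,
        "timestamp": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_certificate(cert: dict) -> bool:
    """Recompute the signature; any change to the recorded fields invalidates it."""
    claimed = cert.get("signature", "")
    body = json.dumps(
        {k: v for k, v in cert.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

if __name__ == "__main__":
    cert = notarize_run(b"model-weights", {"dataset": "train-v1"}, "eu-ai-act-demo")
    print("certificate valid:", verify_certificate(cert))   # True
    cert["policy_id"] = "tampered"
    print("after tampering:", verify_certificate(cert))     # False
```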
Recent studies reveal that 91 percent of organisations have faced supply chain attacks targeting traditional software systems, a risk that becomes even more acute as AI agents automate tasks with minimal supervision.
"As a new era of autonomous AI agents emerges, we must evolve our trust in AI systems," said Jonathan Dotan, Founder of EQTY Lab. "Verifiable Compute protects and controls AI data, models, and agents with the industry's most advanced cryptography. It transforms how organisations enforce AI governance, automate auditing, and collaborate to build safer and more valuable AI."
"Intel is pushing the boundaries on delivering Confidential AI from edge to cloud, and EQTY Lab provides another level of trust to the confidential computing ecosystem," said Anand Pashupathy, VP and General Manager, Security Software and Services Division, Intel Corporation. "Adding Verifiable Compute to Confidential AI deployments helps companies enhance the security, privacy, and accountability of their AI solutions."
Tackling AI Supply Chain Risks
Verifiable Compute addresses critical AI supply chain risks, including AI poisoning, information extraction, privacy backdoors, and denial-of-service attacks, by securing every stage of the AI lifecycle. Its cryptographic framework ensures compliance while safeguarding against AI system failures, supply chain attacks, and regulatory violations.
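One way to picture "securing every stage of the lifecycle" is a hash-chained audit log in which each stage commits to the hash of the previous entry, so tampering with any earlier record invalidates everything that follows. The sketch below is an assumption-laden illustration of that general technique, not EQTY Lab's design.

```python
# Illustrative sketch, not EQTY Lab's design: a hash-chained audit log where each
# lifecycle stage (data prep, training, inference) links to the previous entry's hash.
import hashlib
import json

def append_stage(chain: list, stage: str, artifact_sha256: str) -> list:
    """Append one lifecycle stage, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {"stage": stage, "artifact_sha256": artifact_sha256, "prev_hash": prev_hash}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return chain + [entry]

def chain_is_intact(chain: list) -> bool:
    """Recompute every link; a modification at any earlier stage is detected."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

if __name__ == "__main__":
    chain = []
    chain = append_stage(chain, "data-prep", hashlib.sha256(b"dataset-v1").hexdigest())
    chain = append_stage(chain, "training", hashlib.sha256(b"weights-v1").hexdigest())
    chain = append_stage(chain, "inference", hashlib.sha256(b"response-123").hexdigest())
    print("intact:", chain_is_intact(chain))            # True
    chain[0]["artifact_sha256"] = "poisoned"            # simulate a supply chain tamper
    print("after tampering:", chain_is_intact(chain))   # False
```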
"The true potential of AI won’t be fully realised until we can provide confidential computing to verify every component in the stack," said Michael O'Connor, Nvidia Chief Architect for Confidential Computing, Nvidia. "Securing the trust boundary in the processor sets a standard for next-generation AI workloads to be cryptographically secure and verifiable."
Integration and Market Impact
With native connectors to enterprise platforms like ServiceNow, Databricks, and Palantir, Verifiable Compute simplifies integration for businesses. The market for confidential computing, driven by increasing regulatory demands, is projected to reach USD 184.5 billion by 2032, according to the statement.