Absolute Security with NVIDIA Confidential Computing

- Team Vast

March 16, 2024 | Industry

Just this past year, NVIDIA introduced a groundbreaking security feature in its Hopper architecture to protect sensitive data, AI models, and applications while they are in use. The new feature, called NVIDIA Confidential Computing, offers a hardware-based solution for securely processing data and code in use, preventing unauthorized access and modification. 

Whether deployed on-premises, at the edge, or in the cloud, your sensitive data can be vulnerable to both external attacks and internal threats. NVIDIA Confidential Computing gives you peace of mind that the data and code you most want to protect are as secure as possible – while you also get to enjoy the cutting-edge acceleration of NVIDIA H200 and H100 Tensor Core GPUs.

What Is NVIDIA Confidential Computing?

To put it simply, "confidential computing" (CC) protects data-in-use by performing computation in a trusted execution environment (TEE) that is both hardware-based and attested. 

NVIDIA Confidential Computing pairs a confidential virtual machine (CVM) TEE on the CPU with a GPU TEE on the H200 or H100 that is anchored in an on-die root of trust (RoT). When the GPU boots in CC-On mode, hardware protections that provide confidentiality and integrity of code and data are enabled. (In this context, "confidentiality" means that an attacker cannot access any data or code, and "integrity" means that they cannot modify the execution.) 

Here's how it works:

  1. A chain of trust is established via a GPU boot sequence with a secure and measured boot.
  2. An SPDM (Security Protocol and Data Model) session facilitates a secure connection to the driver in a CPU TEE. 
  3. An attestation report with a cryptographically signed set of measurements is generated, verifying system integrity.

Before a confidential workload can run, three checks must pass: the GPU must be authenticated as a legitimate NVIDIA GPU that supports confidential computing (using a device-unique private key fused onto each GPU, matched to a certified public key); the GPU must not have had its CC support revoked; and the GPU's attestation report must be verified. 

Upon successful verification, the NVIDIA driver in the CVM establishes a secure channel, keyed by a session key, with the GPU hardware TEE in order to transfer data, perform computation, and retrieve results. Because the CPU prevents the GPU from directly accessing a CVM's memory, the CVM and GPU communicate through a shared memory region outside the CVM. They encrypt this traffic with AES-GCM (the Advanced Encryption Standard in Galois/Counter Mode, an authenticated-encryption scheme), so the host system can neither read nor undetectably modify the communication. 

Finally, the GPU copies and decrypts all inputs to its internal memory, where everything runs in plaintext and direct access is blocked. Performance counters are disabled as well, since these could be used for side-channel attacks. 

In this way, a confidential-computing environment is extended from a CVM (or a secure enclave) to a GPU, with attestation, encrypted communication, and memory isolation making it possible. 

Benefits & Positive Impacts

The benefits of NVIDIA Confidential Computing are significant:

  • Hardware-Based Security and Isolation – You can achieve full isolation of virtual machines (VMs) in any environment – on-premises, at the edge, or in the cloud. Your entire workload is secured with built-in hardware firewalls that provide strong protection. 
  • Verifiability with Device Attestation – You can rest assured that only authorized end users can deploy data and code for execution in the H200 or H100's TEE. Plus, through device attestation, you'll know that you're dealing with an authentic NVIDIA GPU, that the firmware hasn't been modified, and that updates have been applied as expected. 
  • Protection from Unauthorized Access – Your sensitive data, AI workloads, and intellectual property are protected, with confidentiality and integrity preserved at all times. All unauthorized entities (including the hypervisor, cloud provider, host operating system, and even anyone with physical access to the infrastructure) are blocked from viewing or modifying the AI application and data during execution. 
  • No Application Code Change – As NVIDIA puts it, the new confidential computing feature "just works," without any code changes required for your GPU-accelerated workloads. 

As you can imagine, confidential computing-enabled GPUs usher in a whole host of use cases where security, privacy, and regulatory compliance are paramount. For instance, intellectual property protection is now possible for proprietary AI models, even when these models are distributed and deployed at scale on shared or remote infrastructure. 

Privacy and confidentiality can also now be preserved in AI training and inference – most notably in industries like healthcare, finance, and the public sector, where data may be sensitive and/or regulated. On top of that, collaboration between multiple parties can be done securely when building and improving AI models across participating sites, for use cases like medical imaging, drug development, and fraud detection. 

Another bonus: organizations, governments, and individuals can outsource AI workloads to a cloud provider while still ensuring protection from the underlying infrastructure. 

Final Thoughts

It's exciting that confidential computing is now possible for even the most demanding workloads! Granted, for the vast majority of use cases, CC-enabled GPUs may arguably be overkill. But for those who need or prefer absolute confidentiality and security, NVIDIA Confidential Computing in the Hopper architecture can now provide that for you. 

At Vast, we understand how important privacy and security are for our customers. It is our hope that you'll find exactly what you need on our cost-effective, on-demand cloud GPU rental platform. 

In our global network of compute providers on Vast, some hosts are experimenting with NVIDIA Confidential Computing; complete support for this feature is not yet available. Join our Discord for the latest product and support updates.
