Confidential Compute

NVIDIA Confidential Computing is a hardware-based security technology that protects data, AI models, and in-memory workloads (e.g., LLM prompts) while they are being processed in a Confidential VM (CVM). The CVM isolates your workload from the host hypervisor, so you do not need to trust the cloud service provider.

How it works:

  • Runs in a VM TEE (Trusted Execution Environment) with CPU memory encryption enabled (AMD SEV-SNP)
  • Trust boundary: a secure perimeter that includes the CPU and the GPU but excludes the rest of the system (OS, hypervisor, cloud provider)
  • Encrypted transfers: data travels from the CPU to the GPU over an encrypted PCIe link; the GPU decrypts it only once it is inside its own protected memory
  • Attestation: before any data is sent, you can cryptographically verify (attest) that the GPU is genuine, its firmware is unmodified, and the secure environment is active

Supported Hardware

Verda currently supports confidential compute on the NVIDIA RTX PRO 6000 (Single GPU). Support for additional Blackwell GPUs is planned for the future:

GPU Model             Configuration   Availability
NVIDIA RTX PRO 6000   Single GPU      Available
NVIDIA B200           Single GPU      Coming soon
NVIDIA B200           Multi GPU       Coming soon

Info

RTX PRO 6000 Multi GPU is not supported. B300 is not yet supported.


Protecting Your Confidential Data

When running a Confidential VM, you can create a user with an encrypted home folder so that your data at rest is also protected.

Create an encrypted user

sudo adduser --encrypt-home newusername
sudo usermod -aG sudo newusername

Log in as the new user with login (this prompts for the password and decrypts the home folder):

login newusername

Verify home folder encryption

Create a test file from your user session:

echo "secret 123" > ~/test.txt

Then exit and inspect the home folder as root. If encryption is working correctly, you will not see test.txt; you will see only the eCryptfs entries:

$ exit
root@ncc-vm:~# ls -lah /home/newusername/
total 8.0K
dr-x------ 2 newusername newusername   4.0K Mar  2 14:25 .
drwxr-xr-x 4 root        root          4.0K Mar  2 14:44 ..
lrwxrwxrwx 1 newusername newusername   33 Mar  2 14:25 .Private -> /home/.ecryptfs/newusername/.Private
lrwxrwxrwx 1 newusername newusername   34 Mar  2 14:25 .ecryptfs -> /home/.ecryptfs/newusername/.ecryptfs
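
You can also inspect the ciphertext directly. With eCryptfs, the encrypted payload lives under the /home/.ecryptfs path that the .Private symlink above points to, and the file names themselves are encrypted. A minimal sketch (run as root on the CVM; the directory guard makes it harmless to run elsewhere, and the exact path is an assumption based on the symlinks above):

```shell
# Inspect the eCryptfs ciphertext directory (path taken from the
# .Private symlink shown above; adjust if your layout differs).
PRIVATE_DIR=/home/.ecryptfs/newusername/.Private
if [ -d "$PRIVATE_DIR" ]; then
  # File names appear encrypted, e.g. ECRYPTFS_FNEK_ENCRYPTED....
  ls "$PRIVATE_DIR" | head -5
else
  echo "no eCryptfs payload at $PRIVATE_DIR (not on the CVM?)"
fi
```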

System Attestation

Attestation lets you verify that your instance is running with full confidential computing protections enabled.

Verify CPU RAM encryption

sudo dmesg | grep -i sev-snp
[    1.913695] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP

Verify GPU confidential compute mode

nvidia-smi conf-compute -q

==============NVSMI CONF-COMPUTE LOG==============

    CC State                   : ON
    Multi-GPU Mode             : None
    CPU CC Capabilities        : AMD SEV-SNP
    GPU CC Capabilities        : CC Capable
    CC GPUs Ready State        : Not Ready

Run NVIDIA remote attestation

Run the NVIDIA attestation to prove to a remote party that the GPU is genuine, has not been tampered with, and is running in confidential mode:

nvattest attest --device gpu --verifier remote

Set the GPU ready state

The GPU will not accept any workload until a user inside the CVM sets the ReadyState. This prevents accidental usage before attestation is complete.

Successfully passing remote attestation (see above) automatically sets the ready state. You can also set it manually:

nvidia-smi conf-compute -srs 1

Warning

Do not set the ready state before verifying attestation. The restriction exists to ensure the GPU's integrity has been confirmed.
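
If you script your provisioning, the attest-then-enable order can be enforced with a small guard so the ready state is never set after a failed attestation. This is an illustrative sketch, not part of the NVIDIA tooling; the command-not-found check lets it run harmlessly outside the CVM:

```shell
# Hypothetical helper: only set the GPU ReadyState if remote
# attestation succeeds. nvattest/nvidia-smi exist only inside the CVM,
# so we skip gracefully when they are absent.
set_ready_after_attestation() {
  if ! command -v nvattest >/dev/null 2>&1; then
    echo "nvattest not found (not inside a CVM); skipping"
    return 0
  fi
  if nvattest attest --device gpu --verifier remote; then
    nvidia-smi conf-compute -srs 1
    echo "attestation passed; ReadyState set"
  else
    echo "attestation failed; ReadyState left unset" >&2
    return 1
  fi
}
set_ready_after_attestation
```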


Booting a Custom OS

By default, Verda confidential compute instances are provisioned with Ubuntu 24.04. If you need a different OS (e.g., Ubuntu 25.10), you can deploy a custom image using kexec to boot from a secondary volume without modifying the original OS.

How it works

  1. Locally: build a ready-to-boot raw image from an Ubuntu 25.10 cloud image with your SSH key and network config baked in
  2. On the instance: write the image to the empty secondary volume (vdb) and kexec into its kernel, bypassing UEFI/GRUB on the primary volume

The original OS on vda is never modified and serves as a fallback.

Prerequisites

  • A Verda instance with two volumes: the provisioned OS (vda) and an empty volume (vdb)
  • SSH access to the instance as root
  • A Linux machine with libguestfs-tools and qemu-utils for image building

1. Configure

cp .env.example .env
# Edit .env: set your instance IP and SSH public key
source .env
.env.example
export INSTANCE_IP=<your-instance-ip>
export SSH_KEY="<your-ssh-public-key>"

2. Install dependencies (local machine)

./install_deps.sh
install_deps.sh
#!/bin/bash
# Install dependencies on the local machine for building the image
set -e

sudo apt-get update
sudo apt-get install -y libguestfs-tools qemu-utils

3. Build the image (local machine)

./01_build_image.sh
01_build_image.sh
#!/bin/bash
# Downloads the Ubuntu 25.10 (questing) cloud image, injects SSH key, network config,
# and first-boot partition resize. Outputs a raw image ready for dd.
#
# Requires: source .env (must set SSH_KEY)
# Requires: libguestfs-tools, qemu-utils (see install_deps.sh)
set -e

if [ -z "$SSH_KEY" ]; then
  echo "Error: SSH_KEY is not set. Run: source .env" >&2
  exit 1
fi

IMAGENAME=questing-server-cloudimg-amd64.img
RAWNAME=questing-server-raw.img

# Download cloud image
wget -O $IMAGENAME https://cloud-images.ubuntu.com/questing/current/$IMAGENAME

# Customize image
virt-customize -a $IMAGENAME \
  --root-password disabled \
  --ssh-inject root:string:"$SSH_KEY" \
  --run-command 'ssh-keygen -A' \
  --run-command 'sed -i "s/#\?PermitRootLogin.*/PermitRootLogin prohibit-password/" /etc/ssh/sshd_config' \
  --mkdir /etc/netplan \
  --write /etc/netplan/01-netcfg.yaml:'network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      critical: true' \
  --run-command 'sed -i "s/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX=\"net.ifnames=0 biosdevname=0 fsck.mode=auto fsck.repair=yes\"/" /etc/default/grub' \
  --run-command 'update-grub' \
  --run-command 'systemctl disable cloud-init cloud-init-local cloud-config cloud-final' \
  --firstboot-command 'ROOT_DEV=$(findmnt -n -o SOURCE /); DISK=$(lsblk -n -o PKNAME $ROOT_DEV); PARTNUM=$(echo $ROOT_DEV | grep -o "[0-9]*$"); growpart /dev/$DISK $PARTNUM && resize2fs $ROOT_DEV' \
  --truncate /etc/machine-id

# Convert to raw for dd
qemu-img convert -f qcow2 -O raw $IMAGENAME $RAWNAME

echo "Image ready: $RAWNAME ($(du -h $RAWNAME | cut -f1))"

This downloads the Ubuntu 25.10 cloud image, injects your SSH key, configures DHCP networking, disables cloud-init, and adds a first-boot partition resize service. Produces questing-server-raw.img (~2 GB).
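
As an aside, the partition-number extraction inside the --firstboot-command above is easy to sanity-check in isolation on sample device paths (pure string handling, no disks touched):

```shell
# Trailing-digit extraction used by the first-boot resize step:
# the partition number is whatever digits end the root device path.
partnum() { echo "$1" | grep -o '[0-9]*$'; }
partnum /dev/vda1        # -> 1
partnum /dev/nvme0n1p3   # -> 3
```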

4. Upload and write image (on the instance)

scp questing-server-raw.img 02_write_image.sh 03_kexec_boot.sh root@$INSTANCE_IP:/root/
ssh root@$INSTANCE_IP "apt-get install -y kexec-tools && bash /root/02_write_image.sh"
02_write_image.sh
#!/bin/bash
# Writes the raw image to /dev/vdb.
# Run this script on the Verda instance (not locally).
set -e

dd if=./questing-server-raw.img of=/dev/vdb bs=4M status=progress conv=fsync
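
Optionally, you can verify the write before booting from it. A hedged sketch (run on the instance; the guards make it a no-op where the image or device is absent; the -n byte-limit option is GNU cmp):

```shell
# Compare the first 64 MiB of the image against /dev/vdb.
# Skips cleanly when run outside the instance.
IMG=./questing-server-raw.img
DEV=/dev/vdb
if [ -f "$IMG" ] && [ -b "$DEV" ]; then
  cmp -n $((64*1024*1024)) "$IMG" "$DEV" && echo "write verified"
else
  echo "image or device not present; skipping verification"
fi
```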

5. Boot into the custom OS (on the instance)

ssh root@$INSTANCE_IP "bash /root/03_kexec_boot.sh"
03_kexec_boot.sh
#!/bin/bash
# Boots into the OS on /dev/vdb via kexec.
# Run this script on the Verda instance (not locally).
#
# Requires: kexec-tools (apt-get install -y kexec-tools)
set -e

# Find root partition on vdb (largest ext4)
partprobe /dev/vdb 2>/dev/null || true
sleep 1
ROOT_PART=$(lsblk -ln -o NAME,FSTYPE,SIZE /dev/vdb | awk '$2=="ext4"' | sort -h -k3 | tail -1 | awk '{print $1}')
ROOT_DEV=/dev/$ROOT_PART

# Mount root
mkdir -p /mnt/newroot
mount $ROOT_DEV /mnt/newroot
ROOT_UUID=$(blkid -s UUID -o value $ROOT_DEV)

# Check for kernel in root, otherwise mount separate boot partition
BOOT_DIR=/mnt/newroot/boot
if ! ls $BOOT_DIR/vmlinuz-* >/dev/null 2>&1; then
  BOOT_PART=$(lsblk -ln -o NAME,FSTYPE,SIZE /dev/vdb | awk '$2=="ext4"' | sort -h -k3 | head -1 | awk '{print $1}')
  if [ "$BOOT_PART" != "$ROOT_PART" ]; then
    BOOT_DIR=/mnt/newboot
    mkdir -p $BOOT_DIR
    mount /dev/$BOOT_PART $BOOT_DIR
  fi
fi

KERNEL=$(ls $BOOT_DIR/vmlinuz-* | sort -V | tail -1)
INITRD=$(ls $BOOT_DIR/initrd.img-* | sort -V | tail -1)

echo "Kernel:  $KERNEL"
echo "Initrd:  $INITRD"
echo "Root:    UUID=$ROOT_UUID"

# Load kernel
kexec -l "$KERNEL" --initrd="$INITRD" \
  --command-line="root=UUID=$ROOT_UUID ro net.ifnames=0 biosdevname=0 fsck.mode=auto fsck.repair=yes"

umount $BOOT_DIR 2>/dev/null || true
umount /mnt/newroot 2>/dev/null || true

# Unload NVIDIA kernel modules before kexec so the new kernel gets clean GPUs.
# Without this, the GPU's GSP firmware remains initialized from the
# current boot and the new kernel fails with "unexpected WPR2 already up".
systemctl stop nvidia-persistenced 2>/dev/null || true
killall -9 nvidia-persistenced 2>/dev/null || true
for MOD in nvidia_uvm nvidia_drm nvidia_modeset nvidia; do
  rmmod $MOD 2>/dev/null && echo "Unloaded $MOD" || true
done

kexec -e

The SSH connection will drop when kexec boots the new kernel. Wait ~15 seconds, then reconnect:

ssh root@$INSTANCE_IP

You should now be running Ubuntu 25.10 from vdb.
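
The partition discovery in 03_kexec_boot.sh treats the largest ext4 filesystem on vdb as root (and the smallest as a possible separate /boot). That selection rule can be exercised on sample lsblk-style rows (illustrative data, no disks touched):

```shell
# Root selection rule from 03_kexec_boot.sh: filter to ext4, sort by
# human-readable size, take the largest. Input mimics
# `lsblk -ln -o NAME,FSTYPE,SIZE` output.
largest_ext4() { awk '$2=="ext4"' | sort -h -k3 | tail -1 | awk '{print $1}'; }
printf 'vdb1 vfat 512M\nvdb2 ext4 2G\nvdb3 ext4 50G\n' | largest_ext4
# -> vdb3
```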

6. Make it permanent (optional)

By default, a hard-reboot from the Verda console boots back into the primary OS on vda. To make it automatically kexec into the custom OS on every reboot:

scp 04_make_permanent.sh root@$INSTANCE_IP:/root/
04_make_permanent.sh
#!/bin/bash
# Installs a systemd service on the primary OS (vda) that automatically
# kexec-boots into the secondary OS (vdb) on every boot.
# Run this script on the Verda instance while booted into the primary OS (vda).
#
# Requires: kexec-tools (apt-get install -y kexec-tools)
set -e

# Create the kexec boot script
cat > /usr/local/bin/kexec-vdb.sh << 'SCRIPT'
#!/bin/bash
set -e

ROOT_PART=$(lsblk -ln -o NAME,FSTYPE,SIZE /dev/vdb | awk '$2=="ext4"' | sort -h -k3 | tail -1 | awk '{print $1}')
ROOT_DEV=/dev/$ROOT_PART

mkdir -p /mnt/newroot
mount $ROOT_DEV /mnt/newroot
ROOT_UUID=$(blkid -s UUID -o value $ROOT_DEV)

BOOT_DIR=/mnt/newroot/boot
if ! ls $BOOT_DIR/vmlinuz-* >/dev/null 2>&1; then
  BOOT_PART=$(lsblk -ln -o NAME,FSTYPE,SIZE /dev/vdb | awk '$2=="ext4"' | sort -h -k3 | head -1 | awk '{print $1}')
  if [ "$BOOT_PART" != "$ROOT_PART" ]; then
    BOOT_DIR=/mnt/newboot
    mkdir -p $BOOT_DIR
    mount /dev/$BOOT_PART $BOOT_DIR
  fi
fi

KERNEL=$(ls $BOOT_DIR/vmlinuz-* | sort -V | tail -1)
INITRD=$(ls $BOOT_DIR/initrd.img-* | sort -V | tail -1)

kexec -l "$KERNEL" --initrd="$INITRD" \
  --command-line="root=UUID=$ROOT_UUID ro net.ifnames=0 biosdevname=0 fsck.mode=auto fsck.repair=yes"

umount $BOOT_DIR 2>/dev/null || true
umount /mnt/newroot 2>/dev/null || true

# Unload NVIDIA modules so the new kernel gets clean GPUs
systemctl stop nvidia-persistenced 2>/dev/null || true
killall -9 nvidia-persistenced 2>/dev/null || true
for MOD in nvidia_uvm nvidia_drm nvidia_modeset nvidia; do
  rmmod $MOD 2>/dev/null || true
done

kexec -e
SCRIPT
chmod +x /usr/local/bin/kexec-vdb.sh

# Create systemd service
cat > /etc/systemd/system/kexec-vdb.service << 'SERVICE'
[Unit]
Description=Kexec boot into secondary OS on vdb
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/kexec-vdb.sh

[Install]
WantedBy=multi-user.target
SERVICE

systemctl daemon-reload
systemctl enable kexec-vdb.service

echo "Permanent kexec boot enabled. The instance will automatically boot into vdb on every reboot."

Hard-reboot the instance from the Verda console to get back into the primary OS, then:

ssh root@$INSTANCE_IP "bash /root/04_make_permanent.sh"

From now on, every time the instance boots into vda, it will automatically kexec into vdb within seconds.

Recovery

If the kexec boot fails, the original Ubuntu 24.04 on vda is untouched. Use the Verda console to hard-reboot back into the provisioned OS, then re-run 03_kexec_boot.sh.

If step 6 was applied and you want to disable auto-kexec, run from the primary OS:

systemctl disable kexec-vdb.service
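
When recovering, it helps to confirm which volume you are actually running from before changing anything. A quick check (assumes virtio disk names vda/vdb as elsewhere in this guide; the fallback keeps it harmless on other systems):

```shell
# Print the backing disk of the current root filesystem:
# vda means the provisioned OS, vdb means the custom OS.
ROOT_SRC=$(findmnt -n -o SOURCE / 2>/dev/null)
echo "root source: $ROOT_SRC"
lsblk -no PKNAME "$ROOT_SRC" 2>/dev/null || echo "no parent block device (overlay/container?)"
```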