The Goal
I run a Plex Media Server inside an Ubuntu 24.04 VM on Proxmox. Software transcoding works, but it absolutely hammers the CPU — especially with HEVC and 4K content. The Proxmox host has an Intel Arc Graphics integrated GPU (Meteor Lake-P, PCI ID 8086:7d45) sitting idle, and Intel's Quick Sync is one of the best hardware transcoders out there. The plan: pass the iGPU through to the VM so Plex can use it for hardware-accelerated transcoding of H.264, HEVC, VP9, and AV1.
Sounds straightforward. It mostly was — with a few detours.
The Setup
| Component | Detail |
|---|---|
| Proxmox | 9.1.5, kernel 6.17.9-1-pve |
| GPU | Intel Arc Graphics (Meteor Lake-P) — PCI 0000:00:02.0, ID 8086:7d45 |
| VM | ID 102, Ubuntu 24.04 LTS |
| Plex | 1.43.0.10492 (native systemd service) |
Identifying Your GPU
Before anything else, you need to find your GPU's PCI address and device ID. These show up repeatedly throughout the configuration — in VFIO bindings, kernel parameters, and VM passthrough settings.
# Find the GPU and its PCI address
lspci | grep -i vga
# → 00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
# Get the numeric vendor:device ID
lspci -nn | grep "00:02.0"
# → 00:02.0 VGA compatible controller [0300]: Intel Corporation Meteor Lake-P [Intel Graphics] [8086:7d45] (rev 08)

The [8086:7d45] at the end is what matters: 8086 is Intel's PCI vendor ID (named after the original Intel 8086 processor), and 7d45 is the specific device ID for this Meteor Lake-P integrated GPU. You'll see these IDs referenced throughout this guide — in vfio-pci ids=8086:7d45 on the host, and i915.force_probe=7d45 inside the guest.
If your device ID is different (e.g., 7d55 for another Meteor Lake SKU), substitute accordingly everywhere below.
Why Not SR-IOV?
My first thought was SR-IOV. The Meteor Lake GPU reports sriov_totalvfs=7 in sysfs, which would let me create virtual functions and share the GPU across multiple VMs without giving up the host's access. Elegant in theory.
In practice:
i915 0000:00:02.0: driver does not support SR-IOV configuration via sysfs

The i915 driver on kernel 6.17.9 explicitly refuses SR-IOV configuration. There are out-of-tree patches floating around, but I wasn't interested in maintaining a custom kernel just for this. Fortunately, the GPU sits alone in IOMMU group 0, which makes full PCIe passthrough clean — no ACS override hacks, no grouping issues. Good enough.
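If you want to check both facts on your own hardware, sysfs exposes them directly. A minimal sketch, assuming the PCI address identified earlier:

# Does the iGPU advertise SR-IOV virtual functions?
cat /sys/bus/pci/devices/0000:00:02.0/sriov_totalvfs
# → 7

# Which IOMMU group is it in, and is it alone there?
readlink /sys/bus/pci/devices/0000:00:02.0/iommu_group
# → ../../../kernel/iommu_groups/0
ls /sys/kernel/iommu_groups/0/devices/
# → 0000:00:02.0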
Step 1: Preparing the Proxmox Host
Enable IOMMU and Release the GPU
The first order of business is making sure IOMMU is active and that the host doesn't claim the GPU before we can hand it to the VM.
/etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=efifb:off"
Three additions here:
- intel_iommu=on — Explicitly enables Intel IOMMU. On some systems it's already active by default, but being explicit avoids surprises.
- iommu=pt — Passthrough mode for devices not managed by VFIO. This keeps DMA performance normal for everything else (NICs, storage controllers, etc.).
- video=efifb:off — This one's important. The EFI framebuffer will grab the iGPU at boot, and once it does, vfio-pci can't claim it. Disabling efifb releases the GPU early enough for VFIO to take over.
Bind the GPU to VFIO
/etc/modprobe.d/vfio.conf (new file):
options vfio-pci ids=8086:7d45
softdep i915 pre: vfio-pci

The first line tells vfio-pci to claim device 8086:7d45. The second is a soft dependency that ensures vfio-pci loads before i915 — otherwise i915 grabs the GPU first and VFIO has nothing to work with.
/etc/modules-load.d/vfio.conf (new file):
vfio
vfio_iommu_type1
vfio_pci

This loads the VFIO stack into the initramfs early enough to intercept the GPU during boot.
Then rebuild:
update-grub
update-initramfs -u -k all
reboot

After reboot, verify with lspci -nnk -s 00:02.0 — the kernel driver in use should show vfio-pci, not i915.
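For reference, the check should look roughly like this (output abridged; the exact strings vary with your lspci version):

lspci -nnk -s 00:02.0
# → 00:02.0 VGA compatible controller [0300]: Intel Corporation Meteor Lake-P [Intel Graphics] [8086:7d45] (rev 08)
#   Kernel driver in use: vfio-pci
#   Kernel modules: i915, xe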
Step 2: Configuring the VM
Three changes to VM 102:
# Switch from i440fx to q35 machine type (required for PCIe passthrough)
qm set 102 --machine q35
# Pass through the GPU
qm set 102 --hostpci0 0000:00:02.0,pcie=1,rombar=0
# Add a serial console port (this becomes important later)
qm set 102 --serial0 socket

The rombar=0 flag disables the option ROM — Intel iGPUs don't have one, and attempting to load a nonexistent ROM can cause boot hangs.
The serial console is for xterm.js access through the Proxmox web UI. I added it preemptively, and it turned out to be critical — more on that below.
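One guest-side detail worth noting: xterm.js only gives you a login prompt if a getty is running on ttyS0 inside the VM. On Ubuntu 24.04 systemd usually spawns one automatically once the serial console is on the kernel command line (Step 4), but enabling it explicitly is harmless. A sketch using the standard systemd template unit:

# Inside the VM: offer a login prompt on the first serial port
sudo systemctl enable --now serial-getty@ttyS0.service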
Step 3: Fixing the Network Inside the VM
This one caught me off guard. After switching from i440fx to q35, the VM booted but had no network. The VirtIO NIC moved to a different PCI slot under the q35 topology, so Linux renamed the interface from ens18 to enp6s18.
/etc/netplan/50-cloud-init.yaml:
network:
  version: 2
  ethernets:
    enp6s18:
      dhcp4: true

Quick netplan apply and we're back online. Easy fix, but easy to miss — especially if your only access path is SSH.
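If you're unsure what the new interface name is, listing the interfaces from the console before touching netplan removes the guesswork. A minimal sketch using iproute2:

# Show all interfaces and their state; the VirtIO NIC appears under its new name
ip -br link
# → lo        UNKNOWN  ...
# → enp6s18   DOWN     ...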
Step 4: Guest Kernel Parameters — The Tricky Part
This is where most of the debugging time went. The GPU was visible inside the VM (lspci showed it), but getting the right driver to claim it in the right way required some careful tuning.
/etc/default/grub inside the VM:
GRUB_CMDLINE_LINUX_DEFAULT="i915.force_probe=7d45 xe.force_probe=!7d45 i915.disable_display=1 plymouth.enable=0 console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"

Let me break these down, because each one exists for a specific reason:
i915.force_probe=7d45 — The kernel doesn't automatically probe Meteor Lake GPUs with i915 (it prefers the newer xe driver). We need to force i915 to claim it because the VAAPI stack we need (intel-media-va-driver-non-free with the iHD backend) only works with i915, not xe.
xe.force_probe=!7d45 — The flip side: explicitly tell the xe driver to not claim this GPU. Without this, xe and i915 race for the device, and xe usually wins.
i915.disable_display=1 — This was the key insight. We don't need display output from the iGPU — we only need the render engine for transcoding. Disabling display mode prevents i915 from initializing KMS/framebuffer, which avoids a cascade of problems: console freezes, CPU hogging, and conflicts with the virtual VGA adapter. Critically, /dev/dri/renderD128 still works with display disabled.
plymouth.enable=0 — Plymouth (the boot splash) was blocking the login prompt on the serial console. Disabling it cleans up the boot sequence.
console=tty0 console=ttyS0,115200n8 — Dual console output: virtual VGA and serial. The serial console is what xterm.js connects to.
After update-grub and a reboot, the GPU should show up with i915 as its driver, and /dev/dri/renderD128 should exist.
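Both conditions are easy to confirm from inside the guest. A quick sketch, output abridged (the guest-side PCI address in my case is 01:00.0; yours may differ):

# The passed-through GPU should now be bound to i915
lspci -nnk -s 01:00.0
# → 01:00.0 VGA compatible controller [0300]: Intel Corporation Meteor Lake-P [Intel Graphics] [8086:7d45] (rev 08)
#   Kernel driver in use: i915

# And the render node should exist
ls /dev/dri/
# renderD128 should be listed alongside the card nodes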
Step 5: Setting Up VAAPI for Plex
With i915 bound correctly, installing the VAAPI stack is straightforward:
apt install vainfo intel-media-va-driver-non-free

Verify it works:
vainfo

You should see the iHD driver and a list of supported profiles including H.264, HEVC, VP9, and AV1 for both encode and decode.
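For reference, a healthy run looks roughly like this (heavily abridged; the exact profile list depends on the driver version):

vainfo
# → libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
# → vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics ...
# →   VAProfileH264Main    : VAEntrypointVLD
# →   VAProfileHEVCMain    : VAEntrypointVLD
# →   VAProfileVP9Profile0 : VAEntrypointVLD
# →   VAProfileAV1Profile0 : VAEntrypointVLD
# →   (...plus the corresponding encode entrypoints)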
Plex runs as the plex user, which needs access to the render device:
usermod -aG video plex

Then set the environment variables so the iHD driver is found. Two places:
/etc/environment (for system-wide access, useful for debugging with vainfo):
LIBVA_DRIVER_NAME=iHD
LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri

/etc/systemd/system/plexmediaserver.service.d/vaapi.conf (for the Plex service specifically):
[Service]
Environment=LIBVA_DRIVER_NAME=iHD
Environment=LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri

Reload systemd, restart Plex, enable "Use hardware acceleration when available" in the Plex transcoder settings, and you're done. Hardware transcoding is live.
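For completeness, the reload-and-restart step in commands (the unit name comes from the Ubuntu plexmediaserver package; the last line just confirms the drop-in was picked up):

sudo systemctl daemon-reload
sudo systemctl restart plexmediaserver
# Confirm the environment override is active
systemctl show -p Environment plexmediaserver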
Step 6: The noVNC Rabbit Hole
Everything worked great at this point — except the noVNC console in the Proxmox web UI was stuck at early boot messages. The VM ran fine, SSH worked, xterm.js worked, Plex was transcoding happily. But the graphical console was frozen.
What's Actually Happening
Looking at dmesg, the timeline tells the story:
[ 0.759] bochs-drm: deactivate vga console
[ 0.765] fbcon: bochs-drmdrmfb (fb0) is primary device ← bochs_drm owns the console
[ 1.748] drm_fb_helper_damage_work hogged CPU >10000us 4 times
[ 2.255] drm_fb_helper_damage_work hogged CPU >10000us 8 times
[ 7.996] drm_fb_helper_damage_work hogged CPU >10000us 16 times
[ 6.770] i915 loads here ← AFTER bochs_drm already has the console

The bochs_drm driver (which powers the virtual VGA for noVNC) takes over the console and then immediately starts choking on its own framebuffer worker — hammering the CPU in a tight loop. This happens before i915 even loads. The Intel driver isn't "stealing" the framebuffer; bochs_drm is tripping over itself in the q35 + PCIe passthrough topology.
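If you want to confirm who owns the framebuffer console in your own guest, the kernel exposes it directly. A small sketch; the name matches the dmesg line above:

# Which driver registered the framebuffer console?
cat /proc/fb
# → 0 bochs-drmdrmfb

# Same information via sysfs
cat /sys/class/graphics/fb0/name
# → bochs-drmdrmfb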
What I Tried
Attempt 1: Switch to virtio-gpu. Changed the VM's virtual display adapter from std (bochs) to virtio. Theory: virtio-gpu is paravirtualized and shouldn't conflict with i915.
Result: Catastrophic. The VM booted but systemd hung, SSH never started, and even the serial console (xterm.js) stopped responding. The virtio-gpu driver apparently caused a DRM subsystem conflict that locked up the boot process. Had to force-stop the VM from the Proxmox host and revert with qm set 102 --vga std.
Attempt 2: Module load ordering. The idea was to use softdep rules to force bochs_drm to load after i915, hoping that if i915 initializes first (in headless mode), bochs_drm would behave better when it takes over the console.
echo "softdep bochs_drm pre: i915" | sudo tee /etc/modprobe.d/novnc-fix.conf
echo "softdep bochs pre: i915" | sudo tee -a /etc/modprobe.d/novnc-fix.confThis is safe to try, but the dmesg evidence shows bochs_drm is the one causing the CPU hogging loop independently of i915. Changing who loads first doesn't fix a driver that hangs on its own during early framebuffer initialization. I gave this about 10-15% odds and decided it wasn't worth the risk of further instability.
The Verdict
noVNC is a known casualty of Intel iGPU passthrough on q35 machines. The bochs_drm CPU hogging is a kernel-level issue with this specific hardware topology — not something fixable from userspace without patching the driver.
For a headless media server, this is an acceptable trade-off. SSH and xterm.js provide full console access, and those are what you'd use in production anyway.
The Final Working State
After all the configuration:
- Hardware transcoding: H.264, HEVC/H.265, VP9, AV1 — encode and decode
- GPU inside VM: Intel Arc Graphics (Meteor Lake-P) visible at 01:00.0
- Render device: /dev/dri/renderD128, owned by plex via the video group
- SSH: Working at 192.168.1.80
- xterm.js: Working via serial console
- noVNC: Stuck at boot messages (cosmetic, does not affect operation)
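One optional check not covered above: to watch the GPU actually doing the work during a transcode, intel_gpu_top from the intel-gpu-tools package can be run inside the VM. This is an extra, not something the setup requires:

# Inside the VM, while a transcode is running
sudo apt install intel-gpu-tools
sudo intel_gpu_top
# The Video engine row should show activity; the Plex dashboard
# also labels hardware transcodes with "(hw)".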
Revert Guide
If you ever need to undo all of this — maybe you're repurposing the GPU or switching to a different transcoding solution — here's the complete rollback.
Inside the VM (do this first, while SSH still works):
# Revert kernel parameters
sudo nano /etc/default/grub
# Restore GRUB_CMDLINE_LINUX_DEFAULT=""
# Remove GRUB_TERMINAL and GRUB_SERIAL_COMMAND lines
sudo update-grub
# Revert network interface name
sudo nano /etc/netplan/50-cloud-init.yaml
# Change enp6s18 back to ens18
# Remove VAAPI configuration
sudo rm /etc/systemd/system/plexmediaserver.service.d/vaapi.conf
sudo systemctl daemon-reload
sudo nano /etc/environment # Remove LIBVA_* lines
# Remove serial console service
sudo systemctl disable serial-getty@ttyS0.service
# Restore Plymouth
sudo systemctl unmask plymouth-start.service \
plymouth-quit.service \
plymouth-quit-wait.service \
plymouth-read-generator.service

Don't forget to disable hardware transcoding in the Plex web UI.
On the Proxmox host:
# Remove GPU passthrough and serial console from VM config
qm set 102 --delete hostpci0
qm set 102 --delete serial0
qm set 102 --delete machine
# Remove VFIO configuration
rm /etc/modprobe.d/vfio.conf
rm /etc/modules-load.d/vfio.conf
# Revert host kernel parameters
nano /etc/default/grub
# Restore: GRUB_CMDLINE_LINUX_DEFAULT="quiet"
# Rebuild and reboot
update-grub
update-initramfs -u -k all
reboot

After reboot, i915 reclaims the GPU on the host, the VM starts without passthrough, noVNC works again, and the network interface reverts to ens18.
Emergency Recovery
If the VM won't start:
qm config 102 # Check the config for obvious issues
qm start 102 --skiplock

If the Proxmox host becomes unreachable after the GRUB changes:
- Connect a physical keyboard and monitor
- At the GRUB menu, press e to edit the boot entry
- Remove intel_iommu=on iommu=pt video=efifb:off from the kernel command line
- Press Ctrl+X to boot with the temporary change
- Once in, complete the full revert from the steps above
Lessons Learned
- i915 vs xe matters for VAAPI. The iHD media driver (which provides the best VAAPI support for modern Intel GPUs) only works with i915. If xe claims your GPU first, VAAPI initialization will fail silently and you'll be stuck wondering why vainfo returns nothing useful.
- i915.disable_display=1 is the magic flag for headless passthrough. It prevents a whole class of framebuffer/KMS conflicts while keeping the render engine fully functional. If you're passing through an Intel iGPU for compute or transcoding (not display), this should be in your kernel parameters.
- Machine type changes have side effects. Switching from i440fx to q35 is required for PCIe passthrough, but it reshuffles PCI slot assignments. Your network interface will get renamed. Have a plan for console access before you make the switch.
- noVNC breakage is a known trade-off. On q35 with Intel iGPU passthrough, the bochs_drm virtual VGA driver tends to choke during early boot. Neither switching to virtio-gpu nor changing module load order reliably fixes it. For headless servers, accept it and move on — xterm.js and SSH are your friends.
- Always add a serial console. Before you start messing with display adapters and GPU passthrough, add --serial0 socket to your VM config. It's a lifeline when everything else breaks, and it costs nothing.
- Don't chase perfect when good enough is running in production. The noVNC console was a nice-to-have. Spending hours trying to fix it — and breaking SSH in the process with the virtio-gpu attempt — was a reminder that stability beats completeness for a server that just needs to transcode video.