NVIDIA Legacy Drivers and Tesla K10

Zaurian

Cadet
Joined
Jan 30, 2023
Messages
2
I am hoping to get an Nvidia Tesla K10 GPU working for PCI Passthrough to VMs on TrueNAS Scale.

During boot, and when examining journalctl -xe, I see the following:
Code:
Mar 25 18:37:54 truenas kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 239
Mar 25 18:37:54 truenas kernel: NVRM: The NVIDIA Tesla K10 GPU installed in this system is
NVRM: supported through the NVIDIA 470.xx Legacy drivers. Please
NVRM: visit http://www.nvidia.com/object/unix.html for more
NVRM: information. The 515.65.01 NVIDIA driver will ignore
NVRM: this GPU. Continuing probe...

I've studied other threads on similar topics with mixed results.

When I download 470.xx and try to install it via the .run installer, I get the following error message:
The NVIDIA driver appears to have been installed previously using a different installer. To prevent potential conflicts, it is recommended either to update the existing installation using the same mechanism by which it was originally installed, or to uninstall the existing installation before installing this driver. Please use the Debian packages instead of the .run file.
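The check that error message hints at can be sketched like this: look for an NVIDIA driver that arrived through the package manager rather than a .run installer. The `dpkg` output below is a made-up sample (the package name and version are assumptions, and SCALE's own packaging may differ); on a live system you would run `dpkg -l 'nvidia*'` instead.

```shell
# Hypothetical sketch: detect a package-manager-installed NVIDIA driver.
# The sample mimics `dpkg -l 'nvidia*'` output; the package name/version
# here are illustrative assumptions, not taken from this system.
dpkg_sample='ii  nvidia-kernel-dkms  515.65.01-1  amd64  NVIDIA binary kernel module DKMS source'

# "ii" means installed; any nvidia* package explains the .run conflict.
pkg=$(printf '%s\n' "$dpkg_sample" | awk '$1 == "ii" && $2 ~ /^nvidia/ {print $2}')
echo "packaged NVIDIA driver found: ${pkg:-none}"
```

If something turns up, the installer's advice applies: the existing install came from packages, so the .run file would conflict with it.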

My question is pretty straightforward: should I even attempt to get NVIDIA 470.xx drivers working with TrueNAS Scale, or is using legacy drivers clearly unsupported?

Additional details below

Code:
Motherboard  ASUSTeK MAXIMUS VI FORMULA
Processor    Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz (iGPU for TrueNAS)
Memory       32 GB (4 x 8GB G.Skill F3-2133C9-8GTX unbuffered Non-ECC DDR3 1333MHz 240pin)
NIC          Intel X540-T2 (2x 10G RJ45)
GPU          NVIDIA Tesla K10 (intended for passthrough to VM)
Code:
admin@truenas[~]$ lsblk | grep disk
sda           8:0    0   3.6T  0 disk
sdb           8:16   0   3.6T  0 disk
sdc           8:32   0 931.5G  0 disk
sdd           8:48   0   3.6T  0 disk
sde           8:64   0 223.6G  0 disk
sdf           8:80   0   3.6T  0 disk
sdg           8:96   0 223.6G  0 disk
sdh           8:112  0   3.6T  0 disk
sdi           8:128  0   3.6T  0 disk
sdj           8:144  0   1.8T  0 disk
sdk           8:160  0   1.8T  0 disk
zd0         230:0    0     1T  0 disk
Code:
admin@truenas[~]$ uname -r
5.15.79+truenas

admin@truenas[~]$ cat /etc/version
22.12.1
Code:
admin@truenas[~]$ sudo dmesg | grep -e DMAR -e IOMMU
[    0.017955] DMAR: IOMMU enabled
[    0.390545] AMD-Vi: AMD IOMMUv2 functionality not available on this system - This is not a bug.
Code:
admin@truenas[~]$ sudo lspci -vnn | grep NVIDIA
03:00.0 3D controller [0302]: NVIDIA Corporation GK104GL [Tesla K10] [10de:118f] (rev a1)
        Subsystem: NVIDIA Corporation GK104GL [Tesla K10] [10de:0970]
04:00.0 3D controller [0302]: NVIDIA Corporation GK104GL [Tesla K10] [10de:118f] (rev a1)
        Subsystem: NVIDIA Corporation GK104GL [Tesla K10] [10de:0970]
Here are some other relevant configuration files
Code:
admin@truenas[~]$ cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10DE:118F,10DE:118F
admin@truenas[~]$ cat /etc/modprobe.d/nvidia.conf
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidia* pre: vfio-pci
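For what it's worth, the ids= line above can be derived straight from the lspci -vnn output, and since both GPUs on the K10 board share one vendor:device pair, a single entry should be enough (vfio-pci matches every device with that ID, so the repeated 10DE:118F is likely redundant). A minimal sketch, using the lspci lines shown earlier as a sample:

```shell
# Sketch: derive the vfio-pci "ids=" option from lspci-style output.
# The sample mirrors the `lspci -vnn` output above; on a live system
# you would pipe `lspci -nn` in instead of this here-string.
sample='03:00.0 3D controller [0302]: NVIDIA Corporation GK104GL [Tesla K10] [10de:118f] (rev a1)
04:00.0 3D controller [0302]: NVIDIA Corporation GK104GL [Tesla K10] [10de:118f] (rev a1)'

# Extract [vendor:device] pairs, strip brackets, de-duplicate, join with commas.
ids=$(printf '%s\n' "$sample" \
  | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' \
  | tr -d '[]' | sort -u | paste -sd, -)
echo "options vfio-pci ids=$ids"
```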
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694

If you do PCIe pass-thru to a VM, SCALE doesn't need a special driver.

The VM needs the driver.
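One way to confirm the host side is set up for that is to check which kernel driver owns the GPU: for passthrough it should be vfio-pci, not nvidia or nouveau. A minimal sketch, mocking the sysfs layout in a temp directory so the logic is visible end to end (on a real system the path would be /sys/bus/pci/devices/0000:03:00.0/driver):

```shell
# Sketch: check which driver a PCI device is bound to. The sysfs tree
# here is a mock-up; substitute the real /sys/bus/pci path on the host.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/drivers/vfio-pci" "$sysfs/devices/0000:03:00.0"
ln -s "$sysfs/drivers/vfio-pci" "$sysfs/devices/0000:03:00.0/driver"

# The "driver" entry is a symlink to the bound driver's directory.
driver=$(basename "$(readlink "$sysfs/devices/0000:03:00.0/driver")")
echo "0000:03:00.0 is bound to: $driver"
```

`lspci -nnk -s 03:00.0` reports the same thing in its "Kernel driver in use" line.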
 

Zaurian

Cadet
Joined
Jan 30, 2023
Messages
2
Thank you for the speedy response, morganL.

As it turns out, I discovered that my i7-4770K processor doesn't support VT-d, so I won't have a working IOMMU for PCI passthrough. I have read other threads here about people trying to get legacy NVIDIA drivers installed for passthrough purposes; perhaps those conversations were misguided.

If SCALE doesn't need a driver, is it safe to assume that if I swap this processor for one that does support VT-d (and I know my motherboard supports it), I should be able to pass this GPU through to a VM? It sounds like the legacy-driver requirement should only matter to the guest.
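As I understand it, the "DMAR: IOMMU enabled" line in my dmesg only reflects the kernel being asked to use the IOMMU, not the hardware actually supporting it; actual VT-d remapping units show up as DMAR DRHD messages. A small sketch of that distinction, using made-up sample dmesg lines (not from this system):

```shell
# Sketch: distinguish "IOMMU enabled" (kernel option parsed) from actual
# VT-d hardware (DRHD remapping units reported from the ACPI DMAR table).
# These sample lines are illustrative assumptions, not real output.
dmesg_sample='[    0.017955] DMAR: IOMMU enabled
[    0.120000] DMAR: Host address width 39
[    0.121000] DMAR: DRHD base: 0x000000fed90000 flags: 0x0'

if printf '%s\n' "$dmesg_sample" | grep -q 'DMAR: DRHD'; then
  verdict="DRHD unit present: VT-d remapping hardware reported"
else
  verdict="no DRHD units: VT-d not active despite IOMMU enabled line"
fi
echo "$verdict"
```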
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Yes, that should be the case.
 