How To: Direct pass through of USB Z-Wave interface to Hyper-V guest VM. It works!

    Starting with Windows Server 2016 and continuing in Server 2019, Hyper-V is capable of dismounting a PCI-Express host device and assigning it exclusively to a guest VM, allowing that guest direct and un-abstracted access to the physical hardware. This feature is known as Discrete Device Assignment (DDA); it is intended to be used with either GPUs or NVMe storage and is targeted at high-performance computing within VMs.

    However, if the PCI-e device in question passes the relevant checks, then in theory devices other than GPUs and NVMe storage can be allocated this way. This is not officially supported by Microsoft, but it is possible.

    I have successfully used this to pass a 4-port USB 3.0 PCI-e expansion card through to my HomeSeer VM. This way, even though HomeSeer is running on a virtualized operating system, I can still plug USB devices into the ports and have them appear in the VM as though it were a bare-metal machine. In my case these USB devices are a HomeSeer SmartStick+ UZB Z-Wave interface and a RFXCOM RFXtrx433XL transceiver.

    Previously I had used a USB-over-Ethernet solution (VirtualHere) to achieve this same effect, but I like this method better. It does not require any additional licensed software, and it does not require additional configuration when adding and removing devices. There is no network stack involved in communication between HS3 and the Z-Wave module. Also, while entirely subjective, the responsiveness of my Z-Wave devices feels slightly improved.

    Some relevant links:

    Personally I followed the guide linked from the workinghardinIT article above, but those links are broken now. I'll re-link them below; in case they break again, you should be able to find the articles using this tag search:

    A rough outline of the requirements:

    CPU must support the relevant virtualization technologies.
    The host machine chipset/BIOS must support PCI-e dismounting.
    The PCI-e device itself must support the correct interrupt method.
    Available on Windows Server 2016/2019 and their Core variants. Standard/Datacenter doesn't seem to matter; I have it working on Standard.
    This is a server feature only; it is NOT available in Hyper-V on Windows 10.
    I do not believe there is a requirement for a particular guest OS.

    All of these requirements are covered in greater detail in the links already provided. Most importantly, you will need to run this pre-check script to assess the viability of your environment; it will tell you, simply, whether DDA is possible for each of your currently installed host devices:
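    If the pre-check script link ever goes stale too, the PCI location path that DDA needs can also be pulled manually with the PnpDevice cmdlets. A rough sketch, assuming the card's controller shows up as a "Renesas" device as mine did (substitute your own friendly name):

```powershell
# List USB-class devices currently present on the host
Get-PnpDevice -Class USB -PresentOnly |
    Format-Table FriendlyName, InstanceId -AutoSize

# Pull the PCIROOT(...)#PCI(...) location path that the DDA cmdlets expect
$dev = Get-PnpDevice -FriendlyName "*Renesas*" -PresentOnly
Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
    -KeyName "DEVPKEY_Device_LocationPaths" |
    Select-Object -ExpandProperty Data
```

    The first entry returned should be the PCI-style path used in the dismount and assignment commands later on.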

    My environment:

    CPU: Intel Xeon E-2146G. This is an entry level Xeon built on Coffee Lake/8th Gen architecture.
    Motherboard: Supermicro X11SCA-F.
    Chipset: Intel C246, this is the required chipset family to support 8th Gen Xeon E-21xx.
    The particular PCI-e USB expansion card:
    (I do not believe the particular card manufacturer is important here, rather it is the NEC chip/code at the heart of it that is significant).
    USB interface:
    Host machine OS: Windows Server 2019, Standard, Core.
    Guest machine OS: Windows Server 2016, Standard, Desktop Experience.

    All other hardware should be agnostic, I think.

    Configuration pre-requisites:

    The guest VM must not have dynamic memory enabled.
    The guest VM must be configured with Automatic Stop Action set to "Turn Off".
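    On Server Core both prerequisites can be set from PowerShell rather than Hyper-V Manager. A sketch, assuming the VM is named Homeseer as mine is:

```powershell
# Dynamic memory must be disabled for DDA
Set-VMMemory -VMName Homeseer -DynamicMemoryEnabled $false

# Automatic Stop Action must be Turn Off (not Save or Shut Down)
Set-VM -VMName Homeseer -AutomaticStopAction TurnOff
```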


    A PCI-e device can only be assigned to one VM at a time.
    The PCI-e device will not be accessible by the host machine while it is assigned to a guest.
    PCI-e devices cannot be hot added or hot removed, the guest will need to be shut down for these operations.
    A VM with an assigned PCI-e device cannot be live migrated to another host. The VM must be shut down and the PCI-e device unassigned before any migration, live or quick, can occur.
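    For reference, un-assigning the device (e.g. ahead of a migration) is the mirror image of the setup, though it's not something I needed for this write-up. A sketch, using my card's location path from later in this post:

```powershell
# Shut the guest down first, then pull the device back from the VM
Remove-VMAssignableDevice -LocationPath "PCIROOT(0)#PCI(1C02)#PCI(0000)" -VMName Homeseer

# Re-mount it on the host so the host OS can use it again
Mount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(1C02)#PCI(0000)"
```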

    With that all covered, the rough outline is this:

    Identify the PCI address of the device.
    Dismount the device from the host OS.
    Assign the device to a VM.

    Here's how it went for me:

    First I ran the script linked above, and received an affirmative result:

    [Screenshot: dda_script_output.PNG]

    Viability of the device has been confirmed and the PCI address identified, so I attempt the dismount:

    [Screenshot: first-dismount.PNG]

    Whoops, the device has not been disabled in the host OS. On a desktop you would normally do this via Device Manager, but this is Server Core, so we must use PowerShell.

    First, let's get a list of all USB-type devices using Get-PnpDevice:

    [Screenshot: list usb.PNG]

    The 'Renesas' device is the one I want; the other results are the motherboard's on-board USB controllers and child devices belonging to the root hubs.

    So I know the name of the device. The first command below searches PnP devices by friendly name, and I confirm that it returns a single unique result.
    In the second command, I pipe that result into the Disable-PnpDevice command.
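    For anyone who can't make out the screenshot, the two commands were roughly as follows (the "*Renesas*" wildcard matches my card's controller; substitute your own device name):

```powershell
# 1. Confirm the friendly-name search returns exactly one device
Get-PnpDevice -FriendlyName "*Renesas*" -PresentOnly

# 2. Pipe it into Disable-PnpDevice; -Confirm:$false suppresses the prompt
Get-PnpDevice -FriendlyName "*Renesas*" -PresentOnly |
    Disable-PnpDevice -Confirm:$false
```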

    [Screenshot: disablepnp.PNG]

    Now the device is in the disabled state, so retry the dismount command:

    [Screenshot: seconddismount.PNG]

    If you've read the articles above, you know that this output is expected. Basically, the device is not a GPU or NVMe storage device, so DDA is not officially supported. We override this using the -Force parameter:
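    Transcribed roughly from my screenshot, the forced dismount looks like this (the location path is the one for my card, identified earlier):

```powershell
# Not a GPU or NVMe device, so -Force is required to override the support check
Dismount-VMHostAssignableDevice -Force -LocationPath "PCIROOT(0)#PCI(1C02)#PCI(0000)"
```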

    [Screenshot: finaldismount.PNG]

    Success: no angry red text. The device is dismounted and added to the pool of available assignable devices.

    Finally assign the device to a VM using the command:

    Add-VMAssignableDevice -LocationPath "PCIROOT(0)#PCI(1C02)#PCI(0000)" -VMName Homeseer
    Where 'HomeSeer' is the name of my VM. You'd think I'd remember to screenshot that part too, wouldn't you? Well you'd be wrong. You'll just have to trust me that it worked, first time too. That usually never happens, so it's kind of unsettling when it does...

    Following this, I ran into some problems in the guest VM, as I had not properly disabled the VirtualHere service/devices before adding the card. It seems they were both trying to create a USB device with the same name or something. As the guest VM is running Windows Server with Desktop Experience, I disabled and removed the VirtualHere USB devices in Device Manager (hooray for GUIs). I then uninstalled the VirtualHere service.

    Following that I enabled the new USB hub and rebooted for good measure.

    After that, lo and behold, my USB devices are present and everything works:

    [Screenshot: devices.PNG]

    The screenshot above simply highlights the presence of the USB device alongside the Hyper-V guest components as proof.

    HomeSeer software started without error, and both the Z-Wave and RFXCOM interfaces functioned normally with no changes required; I didn't have to reassign COM ports or anything.

    Hopefully someone else will find this useful or interesting.