Apologies for the assumptions below, maybe you already know everything … but here are some of my thoughts:
Get hardware passthrough working and assign dedicated hardware that is supported. Get one or two video cards and dedicate at least one properly supported AMD video card to your main dev virtual machine. If you have a second video card, you can have Windows running as a VM with its own AMD video card dedicated to that machine, just in case you need to swap over.
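As a quick sanity check before buying anything, confirm the platform actually exposes an IOMMU and note the PCI IDs of the card you plan to dedicate. A minimal sketch (output will vary by machine):

```sh
# Confirm the platform exposes an IOMMU (VT-d on Intel, AMD-Vi on AMD)
sudo dmesg | grep -i -e DMAR -e AMD-Vi -e IOMMU

# List GPUs with their [vendor:device] IDs; you'll need these later
# to bind the passthrough card to vfio-pci
lspci -nn | grep -i -e vga -e 3d

# Check that the card sits in a sane IOMMU group (ideally alone, or
# only with its own HDMI audio function)
find /sys/kernel/iommu_groups/ -type l | sort
```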
So basically:
- HOST
Your host OS is for virtualization only and NOTHING else. Minimal packages, basic config, done.
On your host you set up hardware passthrough.
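Roughly, that means enabling the IOMMU on the kernel command line and telling vfio-pci to claim the guest GPU before the host driver does. A sketch for a Debian-ish host; the PCI IDs below are example values for an RX 5700 XT, so substitute your own from lspci -nn:

```sh
# 1) /etc/default/grub -- enable the IOMMU (Intel shown; amd_iommu is
#    on by default on AMD boards), then run update-grub and reboot:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# 2) Bind the guest GPU (and its HDMI audio function) to vfio-pci
#    at boot, before amdgpu can grab it:
sudo tee /etc/modprobe.d/vfio.conf <<'EOF'
options vfio-pci ids=1002:731f,1002:ab38
softdep amdgpu pre: vfio-pci
EOF
sudo update-initramfs -u   # dracut -f on Fedora-ish hosts

# 3) After a reboot, the card should report vfio-pci as its driver:
lspci -nnk -d 1002:731f
```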
- MULTIPLE PRIMARY DEV OS VIRTUALIZED ENVIRONMENTS
If you need to run Windows and Linux simultaneously on one box, you will probably want two video cards, one dedicated to each virtual machine via hardware passthrough.
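With libvirt, dedicating a card to a guest is one hostdev entry per PCI function. A sketch, assuming a guest named win10 and a card at host address 0a:00.0 (both hypothetical, use your own):

```sh
# Describe the GPU by its host PCI address (bus 0x0a is an example)
cat > gpu.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# Attach it permanently to the guest; repeat with function='0x1' for
# the card's HDMI audio device so the whole card moves together
virsh attach-device win10 gpu.xml --config
```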
There are a bunch of open source tools (and closed source scamware) for sharing your keyboard / mouse / touchpad / joystick across whatever platforms you need them on.
I don’t have any recommendations; the last non-free platform in my ecosystem is my Android, and that is going to go bye bye soon.
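To make the idea concrete (an example, not an endorsement): Barrier, an open source Synergy fork now continued as Input Leap, runs a server on the machine that owns the keyboard and mouse and a client on everything else. Screen names and the server address below are hypothetical:

```sh
# Two screens side by side; the cursor crosses the shared edge
cat > barrier.conf <<'EOF'
section: screens
    linux-desktop:
    win-vm:
end
section: links
    linux-desktop:
        right = win-vm
    win-vm:
        left = linux-desktop
end
EOF

# On the machine with the physical keyboard/mouse:
barriers --no-daemon --config barrier.conf

# On each machine that should receive input:
barrierc --no-daemon 192.168.122.1
```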
- SINGLE DEV ENVIRONMENT
Once you get hardware passthrough working, you will go a bit crazy with what you can do in a fully virtualized workflow, especially once you get comfortable. Basically, once you go to a fully virtualized workflow, you don’t go back.
Anyway, eventually most people just settle down on one main environment and a bunch of others dedicated to specific tasks. You will probably want some testing servers running concurrently on the same box; RAM and storage are cheap.
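Spinning up another throwaway test server is a one-liner once the host is set up. A sketch with virt-install (name, sizes, ISO path, and os-variant are placeholders):

```sh
# A headless guest: no graphics, serial console only
virt-install \
  --name testserver01 \
  --memory 8192 --vcpus 4 \
  --disk size=64 \
  --location /var/lib/libvirt/images/debian-12.iso \
  --os-variant debian12 \
  --graphics none \
  --console pty,target_type=serial \
  --extra-args 'console=ttyS0'
```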
And eventually you will go from a bunch of virtual machines down to a few or fewer, unless your work or workflow demands change.
- MONITOR SITUATION
My recommendation is a dedicated video card, via hardware passthrough, for each guest OS you will use as a desktop, and just switch the source on your monitors. That is the least amount of trouble, and you are completely isolated from your host machine. Non-desktop virtual machines don’t need access to video cards, as you will probably just ssh into them.
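For those headless guests, libvirt will tell you the address to ssh to (guest name, user, and address are example values):

```sh
# Ask libvirt for the guest's DHCP lease on the default NAT network
virsh domifaddr testserver01

# Then just ssh in using the address reported above
ssh dev@192.168.122.50
```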
The core idea is that you never want to touch your host machine except for upgrades, because too much can go wrong. You can always snapshot your work environments, and if you screw them up you can roll back from the host. Basic risk management.
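With libvirt that rollback workflow is a couple of commands from the host (guest and snapshot names are examples; internal snapshots like this want qcow2-backed disks):

```sh
# Before risky changes inside the dev guest, snapshot it from the host
virsh snapshot-create-as devbox pre-upgrade "before toolchain upgrade"

# List what you have, and roll back if the guest gets hosed
virsh snapshot-list devbox
virsh snapshot-revert devbox pre-upgrade
```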
However, if you like to live stupid, there is an open source project that allows you to:
a) Do full video card passthrough to your VM guest so you get full video acceleration on the guest.
b) Work on your host and open up your guest in a window by COPYING THE RAM FROM THE VIDEO CARD. Similar to your VNC approach, but basically the host accesses the video RAM from the VM and renders it locally, so you get your VM in a local window with the full acceleration the VM gets through its passthrough video card. I forget the name of the software off the top of my head, but if you search the Level1 YouTube channel they have great demos of it and great explanations of how it works. It’s open source.
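If I had to bet, the project is Looking Glass: the guest half copies frames from the passthrough GPU into an IVSHMEM shared-memory region, and a client on the host maps the same region and draws it in a window. The wiring, sketched (guest name and region size are examples):

```sh
# virsh edit win10 -- inside <devices>, add a shared-memory region that
# the Looking Glass host application in the guest writes frames into:
#   <shmem name='looking-glass'>
#     <model type='ivshmem-plain'/>
#     <size unit='M'>64</size>
#   </shmem>

# On the Linux host, the client maps the same region and renders it:
looking-glass-client -f /dev/shm/looking-glass
```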
Of course, the problem is that you are working on your host OS in a window, and you won’t be able to resist fiddling with your host OS. This will eventually result in an update or a screw-up that hoses your virtual machines, and you will then have to pay the price of dogfooding it raw. My advice is as per above: simplify, invest in known supported AMD video cards, pass those through to your VMs, and just switch the source on your monitor to work in your new environment. Never touch your host OS, the same way you don’t fiddle with your ESXi setup without being serious about the maintenance involved.
Otherwise, welcome to the future. All the cool kids are doing hardware passthrough.