Have I gone down the wrong path?

To kick things off, my current setup is:

  • Windows Server 2022 Standard
  • i7-6700K, 32 GB RAM
  • Ubuntu installed as a Hyper-V VM
  • Docker installed inside that Ubuntu VM
  • Docker containers managed by a single docker-compose file.
  • Plenty of containers, such as:
    • Portainer
    • Plex
    • Sonarr, Radarr, Lidarr, Unpackerr, Prowlarr, etc.
    • Deluge
    • Authelia and NPM
    • Gluetun
    • Organizr
    • Various others (WireGuard, DDNS, etc.)
  • Hard drives accessed via mounts in Ubuntu's fstab
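
For context, a typical fstab entry looks something like this (UUID and mount point are placeholders):

```
# /etc/fstab - example entry (placeholder UUID and path)
UUID=xxxx-xxxx  /mnt/media  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```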

The reason I went with Hyper-V is that I already had an old work server, which I repurposed for it. I had other VMs, but now I just use the Ubuntu one.

I am running into issues passing through devices - graphics cards, for example. I can't seem to pass through a GPU (either the NVIDIA card or the iGPU) to the Ubuntu VM so that I can use it for transcoding with Unmanic (or similar).
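
From what I've read, GPU passthrough on Server 2022 means Discrete Device Assignment (DDA), which only works for discrete PCIe cards (not the iGPU, as far as I can tell) and needs the VM to be off. Roughly (the location path and VM name here are placeholders):

```powershell
# Find the GPU and note its PCIe location path
# (Device Manager > GPU > Properties > Details > Location paths)
Get-PnpDevice -Class Display | Select-Object FriendlyName, InstanceId

$loc = "PCIROOT(0)#PCI(0100)#PCI(0000)"   # placeholder - use your own path

# Detach the GPU from the host, then attach it to the stopped VM
Dismount-VMHostAssignableDevice -LocationPath $loc -Force
Add-VMAssignableDevice -LocationPath $loc -VMName "ubuntu"

# MMIO settings most GPUs need under DDA
Set-VM -VMName "ubuntu" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB
```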

Everything feels so clunky, and when I run into errors (like when a mount fails), I feel as though I'm fixing them with band-aids.

Whilst Ubuntu is my only VM, I'd like the option to run others if needed.

So I'm at a crossroads as to whether I should back up everything that's needed and start fresh.

An option I have been toying with is removing Hyper-V altogether, installing something like Ubuntu Server LTS on the bare metal, and then running KVM on that - with Docker installed either on that host Ubuntu or within another Ubuntu VM.
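
From what I understand, getting KVM going on a bare-metal Ubuntu would just be a few packages (and transcoding gets simpler, since containers on the host can use the GPU directly via the NVIDIA Container Toolkit instead of passthrough):

```bash
# Check the CPU advertises virtualisation extensions (should print > 0)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Install KVM/QEMU, libvirt and the management tooling
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst bridge-utils

# Allow my user to manage VMs without sudo
sudo usermod -aG libvirt $USER

# Sanity-check the hypervisor setup
virt-host-validate
```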

For Docker, I use a single docker-compose.yml, which is giant at this stage - some 900 lines. It's an old file that I've had for a while and updated as needed. While it's giant, it's pretty tidy(ish). But is this really the best way to manage my Docker installation - should I be doing it via Portainer, for example?
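
One alternative I've been considering (short of moving to Portainer) is splitting the giant file by stack - newer Compose versions (v2.20+) support a top-level `include:`, something like (file names made up):

```yaml
# docker-compose.yml - top-level file, requires Compose v2.20+
include:
  - media.yml      # Plex, Sonarr, Radarr, Lidarr, Prowlarr...
  - downloads.yml  # Deluge, Gluetun
  - infra.yml      # Portainer, Authelia, NPM, Organizr
```

Older Compose versions can get a similar effect with multiple `-f` flags, e.g. `docker compose -f media.yml -f downloads.yml up -d`.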

Help would really be appreciated.

Thanks!