I am currently running most of my stuff on an Unraid box built from spare parts. It seems like I am hitting its limits, and I just want to turn it into a NAS. Micro PCs/USFF machines are what I am planning on moving stuff to (probably a cluster of 2 for now, but I might expand later). Just a few quick questions:

  1. Running arr services on a Proxmox cluster, downloading to a device on the same network. I don’t think there would be any problems, but I wanted to see what changes need to be made.

  2. Which micro PCs are you running? I am leaning towards the HP ProDesk or Lenovo 7xx/9xx series at around $200 each. I don’t plan on getting more than 2-3, and I don’t run too many things, but I’d want enough overhead if I move stuff over to Home Assistant, plus Windows and Linux VMs if needed.

  3. Any best practices you recommend when starting a Proxmox cluster? I’ve learned over time that it’s better to set things up correctly from the start than to fix them while they’re running. I wish I could coach myself from 7 years ago. Would have saved a lot of headaches lol.

  • monkinto@lemmy.world · 10 months ago

    Is there a reason to do this over just giving the NIC for the VM/container a VLAN tag?

    • DeltaTangoLima@reddrefuge.com · 10 months ago

      You still need to do that, but the Linux bridge interface needs the VLANs defined as well, because the physical switch port trunking the traffic tags the respective VLANs to/from the Proxmox server and its virtual guests.

      So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN).

      My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and VLAN tags) in vlan100.
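      For reference, the bridge side of that is just a few stanzas in /etc/network/interfaces. Here’s a minimal sketch - the addresses are placeholders, not my real ones, and the vmbr1.100 host address is the optional bit I mention at the end:

      ```
      # Physical NIC carrying the trunk from the switch
      auto enp2s0f0
      iface enp2s0f0 inet manual

      # VLAN-aware Linux bridge on top of the physical NIC
      auto vmbr1
      iface vmbr1 inet manual
          bridge-ports enp2s0f0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 60 100

      # Proxmox server's own address, in vlan60
      auto vmbr1.60
      iface vmbr1.60 inet static
          address 192.168.60.10/24    # placeholder address
          gateway 192.168.60.1        # placeholder gateway

      # Host interface in the guest VLAN (optional - see my last paragraph)
      auto vmbr1.100
      iface vmbr1.100 inet static
          address 192.168.100.10/24   # placeholder address
      ```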

      The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:

      • switch trunk port
        • enp2s0f0 (physical)
          • vmbr1 (Linux bridge)
            • vmbr1.60 (Proxmox server interface)
            • vmbr1.100 (Proxmox VLAN interface)
              • virtual guest nic (w/ vlan tag and IP address)
            • vtnet1 (OPNsense “physical” nic, but actually virtual)
              • vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)

      All virtual guests default route via OPNsense’s IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.
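      The guest side is simpler than it sounds: each guest NIC just gets the bridge plus the tag. As a sketch (the VM ID and the gateway address are placeholders):

      ```
      # Attach a guest's NIC to vmbr1, tagged for vlan100
      qm set 101 --net0 virtio,bridge=vmbr1,tag=100

      # Inside the guest, default route via OPNsense's vlan100 address
      ip route add default via 192.168.100.1
      ```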

      Like I said, it’s a headfuck when you first set it up. Interface-ception.

      The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I’d use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would’ve been overkill.