Cake day: August 14th, 2025



  • Netplan config? Sure:

    network:
      ethernets:
        enp35s0:
          dhcp4: false
        enp36s0:
          dhcp4: false
      vlans:
        enp35s0.100:
          id: 100
          link: enp35s0
          dhcp4: false
        enp35s0.101:
          id: 101
          link: enp35s0
          dhcp4: false
      bridges:
        br0:
          # untagged
          interfaces: [enp35s0]
          dhcp4: false
        br0.100:
          # vlan 100
          interfaces: [enp35s0.100]
          dhcp4: false
        br0.101:
          # vlan 101
          interfaces: [enp35s0.101]
          dhcp4: true
      version: 2
    

    I’m not sure if the version property is still required. The only interface with an IP is br0.101; OPNsense provides DHCP (v4).
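
    If you wanted a static address on that VLAN instead of DHCP, the br0.101 stanza could look roughly like this (a sketch; the addresses, gateway and nameserver here are made-up examples, not from my setup):

        br0.101:
          interfaces: [enp35s0.101]
          dhcp4: false
          # hypothetical static addressing - adjust to your subnet
          addresses: [192.168.101.10/24]
          routes:
            - to: default
              via: 192.168.101.1
          nameservers:
            addresses: [192.168.101.1]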

    You can attach multiple ethernet devices to a bridge (which I did not):

        br0.100:
          interfaces:
            - enp35s0.100
            - two
            - three
    

    I’m not sure if you can attach the docker bridge via netplan - it has to exist at boot time, I think. My docker containers run inside a VM (KVM) with one interface, which sits in one of the VLANs. The VM’s interface is attached to the bridge device (br0.100); it receives its IP from the router and behaves like a real server.
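
    For reference, attaching a VM to such a bridge in libvirt looks roughly like this in the domain XML (a sketch of just the relevant element; your bridge name may differ):

        <interface type='bridge'>
          <source bridge='br0.100'/>
          <model type='virtio'/>
        </interface>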



    I don’t think I can name the reason for the issue you’ve described - I don’t have enough information for that.

    First thing to check would be: are routing and firewalling OK? Later: DNS. Even later: are the services reachable?
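
    That order of checks could look roughly like this from a client in one VLAN (all addresses, names and the port are placeholders, not from your setup):

        # 1. routing/firewalling: can we reach a host in the other VLAN at all?
        ping -c 3 192.168.101.10
        traceroute 192.168.101.10   # should show one hop via the OPNsense router

        # 2. DNS: does the name resolve to the expected address?
        dig +short service.lan

        # 3. is the service reachable on its port?
        nc -vz 192.168.101.10 443
        curl -v https://service.lan/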

    Does the OPNsense instance have multiple VLANs and zones configured too, with one server interface in each? Do the packets between the VLANs take a path via the router?

    I tried to give my server multiple interfaces on different VLANs once, but ran into problems with that approach. I then added one bridge interface per VLAN to the server and gave it just one IP on one VLAN. That way the server isn’t tempted to route things itself or to deliver packets on the wrong interface; an entire class of possible errors was removed. Docker containers and VMs can still have IPs in their respective VLANs/nets.

    It is worth noting that docker firewalling and ufw don’t play well together, which could be the reason for unreachable services. Moving the docker host into an LXC container abstracts the issue away. Incus can run OCI containers itself and may be an alternative to docker (but not to docker compose).
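
    For what it’s worth, running an OCI image directly under Incus looks roughly like this (requires a recent Incus version with OCI support; the remote name, image and instance name are just examples):

        # add Docker Hub as an OCI image remote (the name "oci" is arbitrary)
        incus remote add oci https://docker.io --protocol=oci
        # launch a container from an image on that remote
        incus launch oci:nginx web
        incus list web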

    I can’t say anything about over-engineering. It is a hobby after all and you decide what is important and how much complexity you need. :)