I’m thinking about getting started with Home Assistant using Docker on an older Raspberry Pi. I’m already hosting a Grafana service on it, so it can’t be fully dedicated to HA. Curious what everyone else is using.
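For reference, a minimal sketch of running Home Assistant Container alongside other services on a Pi. The config path and timezone are placeholders, and note that the container variant doesn’t give you HAOS features like add-ons or the supervisor:

```shell
# Home Assistant Container on an existing Docker host (e.g. a Pi that
# already runs Grafana). /srv/homeassistant is a placeholder config path.
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  -e TZ=Europe/London \
  -v /srv/homeassistant:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable
```

Host networking matters here: several integrations rely on mDNS/UPnP discovery, which doesn’t work through Docker’s default bridge network.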

  • Llamatron@lemmy.world · 7 days ago (edited)

    Up until a couple of weeks ago I was running it on a dedicated Pi 4. It’s now running as a VM in Proxmox on a pair of Lenovo M710q mini PCs I got off eBay for £40 each.

    I did load them up with RAM, upgrade the CPUs and add a second NIC, so each probably came in at more like the cost of a 16GB Pi 5; the RAM was the pricey part. I’ve measured the power usage and they each draw about a third more power than the Pi did, which I’m happy with. Given that, the added flexibility of running Proxmox, and how quiet they are, I’m super happy with the setup.

    Oh, and I used to run Pi-hole on another Pi. That’s gone now, replaced with Technitium DNS running as a pair of VMs too. That was surprisingly easy to do.
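Running Technitium as a pair of VMs or containers really is straightforward; a rough per-node sketch (image name, ports and the config mount are from memory of the Technitium Docker docs — verify before relying on them):

```shell
# Technitium DNS Server: port 53 for DNS, 5380 for the web console.
docker run -d \
  --name technitium-dns \
  --restart=unless-stopped \
  -p 53:53/udp -p 53:53/tcp -p 5380:5380 \
  -v technitium-config:/etc/dns \
  technitium/dns-server
```

Hand out both instances’ addresses via DHCP and clients fail over automatically if one VM is down.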

    • SayCyberOnceMore@feddit.uk · 7 days ago

      It’s now running as a VM in ProxMox on a pair of Lenovo M710q mini PCs

      So, have you got High Availability set up? If so, I’d like to know more about that part…

      • Llamatron@lemmy.world · 7 days ago

        So my plan had been to set up a pair of Proxmox hosts, use Ceph for shared storage, and use HA so VMs could magically move around if a host died. However, I discovered Ceph and HA need a minimum of 3 hosts. HA can be done if you set up a Pi or some other third machine to act as the tie-breaking vote in the event of a failure, but as I didn’t have Ceph I’ve not bothered trying.
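For what it’s worth, Proxmox supports exactly that third-vote arrangement via a QDevice: a lightweight daemon on a Pi (or any third box) supplies the quorum vote without being a full cluster node. A rough sketch, assuming the Pi is reachable at 192.168.1.50 (placeholder address):

```shell
# On the Pi (external quorum host): install the qnetd daemon.
apt install corosync-qnetd

# On each Proxmox cluster node: install the qdevice client.
apt install corosync-qdevice

# On one cluster node: register the Pi as the QDevice
# (prompts for root SSH access to the Pi).
pvecm qdevice setup 192.168.1.50

# Verify: the cluster should now report 3 expected votes.
pvecm status
```

This solves quorum for HA, but not shared storage — Ceph still wants three full nodes, so something like ZFS replication is the usual two-node compromise.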

        I’ve read Ceph can work on 2 but not well or reliably.

        I might set up a third host some day, but it seems a bit of a waste as I just don’t need that amount of resources for what I’m running.

        And I should have known, really; I’ve a bit of a background in VMware, albeit at the enterprise level, so I’ve never had to even think about 2- or 3-node clusters.

        • GreatAlbatross@feddit.uk · 3 days ago

          I hit this stumbling block.
          And I don’t quite want to go the whole hog/headache of HA.
          My solution was to run a warm spare: once a week the VM gets synced to the second box, but never powered on.
          And HA backups are pretty good anyway; it doesn’t take long to bring it back.
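A warm spare like that can be driven by a weekly cron job with vzdump; a rough sketch (VM ID 100, hostname and paths are placeholders), restoring on the second box only when the primary actually dies:

```shell
# Weekly on the primary: snapshot-backup VM 100, then copy to the spare.
vzdump 100 --mode snapshot --compress zstd --dumpdir /var/lib/vz/dump
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst spare-host:/var/lib/vz/dump/

# On the spare, only after a primary failure: restore and start the VM.
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100
qm start 100
```

You lose at most a week of changes this way, which is usually fine for a Home Assistant config that rarely changes.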

  • Dave@lemmy.nz · 8 days ago

    I use a dedicated Raspberry Pi (a 5; previously it was on a 4).

    I host everything else on a different server; the HA one is dedicated. Pretty nice, because then it can run HAOS and basically manages everything itself.

    One factor in keeping it separate was that I wanted it to be resilient. I don’t want stuff to stop working if I restart my server, or if the server dies for some reason. My messing around on my server is isolated from my smart home.

    I also have a separate Pi (a 4, previously a 1B) that runs Pi-hole, again on its own hardware for the same reason: if it stops working or even pauses for a moment, the internet stops working.

    • Ludicrous0251@piefed.zip · 7 days ago

      People throw a lot of shade at the Pi, but I love having dedicated hardware for some more critical projects.